I am deploying a DACPAC via SqlPackage.exe to database servers that have a large volume of transactional replication in SQL Server. The DACPAC is built as the output of a SQL Server Database Project. When I attempt to deploy the DACPAC to the database with replication enabled, the SqlPackage execution returns errors such as: Error SQL72035: [dbo].[SomeObject] is replicated and cannot be modified.
I found the DoNotAlterReplicatedObjects parameter, which skips altering objects that have replication turned on and would silence those errors, but that isn't what I want to do. Instead, I want to alter all objects as part of the deployment, regardless of replication.
The only option that I can think of to deploy the DACPAC to these replicated databases is to:
1. remove the replication through a script before deploying,
2. deploy the DACPAC via SqlPackage,
3. reconstruct the replication via scripts after deploying.
Unfortunately, the database is so heavily replicated that step #3 above would take over 7 hours to complete, so this is not a practical solution.
Is there a better way to use SQL Server Database Projects and DACPACs to deploy to databases with a lot of replication?
Any assistance would be appreciated. Thank you in advance for your advice.
We solved the issue by doing the following. Hopefully this will work for others as well. The high level idea is that you need to disable "Do not ALTER replicated objects" and enable "Ignore column order".
There are a couple of ways to do this.
If you are using the SqlPackage tool in your deployment pipeline, use the DoNotAlterReplicatedObjects and IgnoreColumnOrder properties (see this link): /p:DoNotAlterReplicatedObjects=False /p:IgnoreColumnOrder=True
If you are using C# or PowerShell Dac classes, then use the DacDeployOptions.DoNotAlterReplicatedObjects and DacDeployOptions.IgnoreColumnOrder properties.
You can directly modify the "Advanced Publish Settings" in the Visual Studio IDE: uncheck the Do not ALTER replicated objects checkbox and check the Ignore column order checkbox. See this StackOverflow answer for an example of the Ignore column order checkbox.
Our theory on why this works is that ALTER TABLE can only append a column to the end of a table, so the only way to add a column at a specific position is to drop and recreate the table. Ignoring column order tells the publish process to append the column to the end regardless of where the column is positioned in the script.
So the place this could be a problem is if you do an INSERT without specifying a column list, because you expect the columns to be in a specific order and they're not.
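For example, with a hypothetical dbo.Orders table whose physical column order no longer matches the project definition, the first form is fragile while the second is unaffected:

INSERT INTO dbo.Orders VALUES (42, 'Widget', '2024-01-15');  -- relies on positional column order; values can land in the wrong columns
INSERT INTO dbo.Orders (OrderId, ProductName, OrderDate) VALUES (42, 'Widget', '2024-01-15');  -- explicit column list is safe regardless of column order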
Another potential side-effect you could run into is that a table created by the DACPAC could have a different column order than a table altered by the DACPAC. We have been using this solution for a few months without issues, but the above are things to be aware of.
I hope that this helps.
We are just trying to implement SSDT in our project.
We have lots of clients for one of our products which is built on a single DB (DBDB) with tables and stored procedures only.
We created one SSDT project for database DBDB (using VS 2012 > SQL Server Object Explorer > right-click the database > Create New Project).
Once we build that project it creates one .sql file.
Problem: if we run that file on the client's DBDB, it creates all the tables again and deletes all the records in them [this fulfills the schema requirements but deletes the existing records :-( ]
What we need: only the changes that are not already present on the client's DBDB should be applied.
Note: we have no direct access to the client's DBDB database to compare it with our latest DBDB. We can only send them some magic script file which will update their DBDB to the latest state.
The only way to update the client's DB is to compare the DB schemas and then apply the delta. Whichever way you do it, you will need some way to get hold of the schema that's running at the client:
IF you ship a versioned product, it is easiest to deploy version N-1 of that to your development server and compare that to the version N you are going to ship. This way, SSDT can generate the migration script you need to ship to the client to pull that DB up to the current schema.
IF you don't have a versioned product, or your client might have altered the schema, you will need to find a way to extract the schema on site (maybe using SSDT there) and then let SSDT create the delta.
Option: You can skip using the compare feature of SSDT altogether. But then you need to write your migration script yourself. For each modification to the schema, you write the DDL statements yourself and wrap them in if clauses that check for the old state, so the changes will only be made once and only if the old state exists. This way, it doesn't really matter from which state to which state you are going, as the script will determine for each step if and what to do.
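A hand-written migration step of that kind might look like this (the table and column names are just placeholders; the guard makes the script safe to re-run):

IF NOT EXISTS (SELECT 1 FROM sys.columns WHERE object_id = OBJECT_ID(N'dbo.Customer') AND name = N'MiddleName')
BEGIN
    ALTER TABLE dbo.Customer ADD MiddleName nvarchar(50) NULL;  -- only applied if the column is still missing
END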
The last is the most flexible, but it requires thorough testing on its own, and of course it should have been started way before the situation you are in now, where you no longer know what the changes have been. But it can help for next time.
This only applies to schema changes on the tables, because you can always fall back to just dropping and recreating ALL stored procedures, since nothing is lost by dropping them.
It sounds like you may not be pushing the changes correctly. You have a couple of options if you've built a SQL Project.
Give them the dacpac and have them use SQLPackage to update their own database.
Generate an update script against your customer's "current" version and give that to them.
In any case, it sounds like your publish option might be set to drop and recreate the database each time. I've written quite a few articles on SSDT SQL Projects and getting started that might be helpful here: http://schottsql.blogspot.com/2013/10/all-ssdt-articles.html
I have recently created a database project in VS2010 for an existing SQL Server 2008 R2 DB. I have updated 1 table out of 11 by adding 3 new columns to the end. I then updated 4 views that referred to that table.
I then tried a Build/Deploy with it only generating a script.
I have inspected the script and for every single table in the DB, it has generated code that will create a temp version of each table, copy the data from the existing table, drop the original and rename the copy.
I saw the posting on here where it insisted on rebuilding the table for dropped columns and I tried setting the IgnoreColumnOrder but it didn't make any difference. It didn't seem relevant to my situation, anyway, so I wasn't surprised.
I created my DB project by getting the DBA to give me a fully scripted version of Production, built that DB on my PC version of SQL Server and then created my initial project from that. I don't think that would make any difference and I have compared the project definition of the tables to the target Dev DB and they are the same.
I have "Always recreate database" unticked and "Block incremental deployment if data loss might occur" ticked. Don't suppose they have anything to do with my issue?
Any ideas?
I found a backup of the database and as per Peter's suggestion, ran a Schema Compare. The difference turned out to be that the target DB had PAGE compression on most of the tables but that was not in the project definition.
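For anyone hitting the same thing: declaring the compression in the project's table definitions (a hypothetical table is shown below) keeps the project and the target in sync, so the comparison no longer forces a rebuild:

CREATE TABLE dbo.SomeTable
(
    SomeTableId int NOT NULL CONSTRAINT PK_SomeTable PRIMARY KEY CLUSTERED,
    SomeValue nvarchar(100) NULL
)
WITH (DATA_COMPRESSION = PAGE);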
Our firm does not have a dedicated DBA employed but does have select developers performing DBA functions. We update our database often during a development cycle and have a release script with the various updates. We keep our db schema and objects in Visual Studio in a Database Project.
However, we often encounter two stumbling blocks that cause time-intensive manual intervention:
Developers cannot always sync from the Database Project to their local database, because if we have added a NOT NULL field to an existing table that contains data, then the Deploy process from VS to the db isn't smart enough to automagically insert "test" data just to get the field into the table (unless this is a setting someplace?). We would of course follow this up, if possible, with a script to populate the field with real data, but we can't because the deployment fails.
Sometimes a developer will restore a backup from any past random date. There is no way of knowing exactly which db updates were applied to this database, so they don't know which scripts to start applying. What we do in this case is to check each script, chronologically, to see if the changes from that script have been applied to the database. If so, move on to the next script to run. Repeat.
One method we have discussed is potentially creating a "Database Update Level" table in the database with 1 field, 1 row. It would maintain the level that the database has been updated through. For example, when the first script is run, update the level to 2. In each db script, we would wrap the statements in a check such as
IF (SELECT Database_Update_Level FROM Database_Update_Level) < 2
BEGIN
    -- do some things here
    UPDATE Database_Update_Level SET Database_Update_Level = 2;
END
The db scripts can then be run on any database, because each block of statements only executes if the database is still below that update level.
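For what it's worth, the level table itself could be as simple as this (one column, seeded with a single row at the initial level):

CREATE TABLE Database_Update_Level (Database_Update_Level int NOT NULL);
INSERT INTO Database_Update_Level (Database_Update_Level) VALUES (1);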
This feels like we're missing something because this must be a common problem that every development shop that allows developers to develop locally encounters.
Any insights would be greatly appreciated.
Thanks.
About the restore problem, I don't see many solutions; you might try to prevent full restores and run scripts to populate the tables instead. As for versioning structures, do you use SSDT (SQL Server Data Tools) in VS? You can generate DACPACs and generate diff scripts.
But what you are saying is that you also alter structures directly in the database? Is there no way to avoid that? If not, you could for example use DDL triggers (http://www.mssqltips.com/sqlservertip/2085/sql-server-ddl-triggers-to-track-all-database-changes/) to at least get notified that something changed.
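A minimal sketch of such a trigger, assuming a hypothetical dbo.DdlChangeLog audit table:

CREATE TABLE dbo.DdlChangeLog (EventTime datetime NOT NULL, EventData xml NULL);
GO
CREATE TRIGGER trg_TrackDdlChanges
ON DATABASE
FOR DDL_DATABASE_LEVEL_EVENTS
AS
BEGIN
    -- EVENTDATA() describes the DDL statement that fired the trigger
    INSERT INTO dbo.DdlChangeLog (EventTime, EventData)
    VALUES (GETDATE(), EVENTDATA());
END;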
One easy way to solve the NOT NULL problem is to establish default constraints (could just be an empty string, max number value for the data type, max date value, etc.). When the publish occurs the new column will be populated with the default value.
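For example, declaring the new column with a default in the project (hypothetical names below) means the generated ALTER back-fills the existing rows, so the NOT NULL constraint is satisfied:

ALTER TABLE dbo.Customer ADD PreferredName nvarchar(50) NOT NULL CONSTRAINT DF_Customer_PreferredName DEFAULT ('');  -- existing rows receive the empty-string default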
For the second issue I'd utilize post-deploy scripts in your SSDT project to keep the data in sync utilizing 'NOT EXISTS' to make incremental changes. That way, you can simply publish the database and allow the data updates to occur one after another.
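A typical post-deploy statement of that shape (with a hypothetical lookup table) would be:

IF NOT EXISTS (SELECT 1 FROM dbo.OrderStatus WHERE StatusCode = 'SHIP')
BEGIN
    INSERT INTO dbo.OrderStatus (StatusCode, Description) VALUES ('SHIP', 'Shipped');
END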
I have a SQL Server 2005 database that I want to set up replication for. The problem is that the database has two schemas, both of which have a table with the same name in it.
For some reason, even though the tables are in different schemas, the replication creation fails when done through Management Studio due to conflicting article names (I assume it's trying to use the same article name for both tables in the different schemas).
Is there any workaround for doing this in the studio? I can probably write a script or program to do this, but for just this one thing it is a bit annoying, and it probably won't be allowed to run in production.
Perhaps there is a hot fix or something I'm not aware about?
Cheers,
There doesn't appear to be a way around this purely using the new publication wizard in SSMS - the article name is always the table name without a schema-qualifier, and can't be customised from the wizard - although there is a work-around if you use the scripting options.
Go through the wizard as normal, but at the end of the process, untick the "create publication" option and select the "Generate script file..." option.
Once the file is created, open it and edit the article names so that they no longer conflict, then execute the script in the publication database.
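Concretely, that means giving each sp_addarticle call a unique @article value even though the @source_object names collide; for example (publication and schema names are just placeholders):

EXEC sp_addarticle @publication = N'MyPublication', @article = N'Sales_Customers', @source_owner = N'Sales', @source_object = N'Customers';
EXEC sp_addarticle @publication = N'MyPublication', @article = N'Archive_Customers', @source_owner = N'Archive', @source_object = N'Customers';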
Could you think of having two publications for your database, each publication being linked to one of the schemas? Of course, this means that you'll have to define two different subscriptions, one for each publication. The feasibility of this proposal will of course highly depend on how you need to distribute your data among the subscribers, and on the way your users access the data.
We have a production SQL Server 2005 database server with the production version of our application's database on it. I would like to be able to copy down the data contents of the production database to a development server for testing.
Several sites (and Microsoft's forums) suggest using the Backup/Restore options to copy databases from one server to another, but this solution is unworkable for several reasons (I don't have backup authority on our production database, I don't want to overwrite permissions on the development server, I don't want to overwrite structure changes on the development server, etc...)
I've tried using the SQL Import/Export Wizard in SQL Server 2005, but it always reports primary key violations. How can I copy the contents of a database from the production server to development without using the "Backup/Restore" method?
Well without the proper rights it really becomes more tedious and less than ideal.
One way that I would recommend though is to drop all of your constraints and indexes and then add them again once the data has been imported/exported.
Not an elegant solution but it'll process really fast.
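A lighter-weight variant of the same idea (disabling rather than dropping the foreign keys; the table name is just a placeholder) would be:

ALTER TABLE dbo.OrderDetail NOCHECK CONSTRAINT ALL;   -- disable FK checks before the load
-- ... import/export the data ...
ALTER TABLE dbo.OrderDetail WITH CHECK CHECK CONSTRAINT ALL;   -- re-enable and re-validate afterwards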
EDIT:
Another option is to create an SSIS package where you specifically dump the tables in an order that won't violate the constraints.
I often use SQL Data Compare (http://www.red-gate.com/products/sql_data_compare/index.htm) for this task: the synchronization scripts it writes will remove the relationships during the transfer and reapply them, but that is OK in most development cases. It works especially well with smaller databases or subsets of databases.
If your database is large, I would recommend finding someone with the keys to the kingdom. Doing an out of sequence backup could mess with the ability to restore the database from the primary backup (if they are doing partials during the week for example) by marking records backed up when they are only in your backup, so don't try to bypass that security if you are unsure why it is there.
Assuming that you can connect to both DBs from the same machine (which you almost always can - I do it with my production servers via a VPN).
For each table
DELETE FROM devserv.dbo.tablename;
SET IDENTITY_INSERT devserv.dbo.tablename ON;
-- an explicit column list is required while IDENTITY_INSERT is ON
INSERT INTO devserv.dbo.tablename (col1, col2, col3)
SELECT col1, col2, col3 FROM prodserv.dbo.tablename;
SET IDENTITY_INSERT devserv.dbo.tablename OFF;
It is obviously worth noting that you will need to do this in a certain order if your tables have foreign key constraints.
The import/export wizard is notorious for this sort of thing, and actually has a bug that makes it even less useful in working out the dependencies (sorry, don't have the details to hand).
SSIS does a much better job, but you'll have to add each table copy task by hand (in fact a data source, copy task and data destination object for each). It's a little tedious to set up (more than it should be), but a lot simpler than writing your own code.
One tip: avoid generating an SSIS project with the import/export wizard, thinking it will be easier to just tweak it. It generates something that most people would find unrecognisable, even with some SSIS experience!
If you do not have backup permission on the production server, I guess this is because you are using a shared SQL Server from a web host. In this case, check if your web host provides the tool called myLittleBackup. It allows copying a db from one server to another in a few clicks...
I'd contact someone that does have access to backup the database. Permissions are usually there for a reason.
I might consider getting a backup, as there will be one whether you run it or not (at least in theory a Prod DB is being backed up :) ).
Then just restore to a brand new database on your dev box so you don't conflict with anything or anyone else.
If you restore to a new DB, you could also pull the tables and data across manually if you wanted, and since you created the DB you can give yourself rights and it's all OK. There are a number of other methods, all tedious.
We just use the SQL Server Database Publishing Wizard at work.
You would use this little utility to generate a T-SQL script that describes your production database (including all its data). Then connect to your dev server and run the generated script.
If you have to avoid backup/restore, this is what I would recommend (these steps assume you don't want to maintain the old schema NAME, just the structure):
Download OpenDBDiff. Choose 'Compare' between the source and an (empty) destination. Go to the sync script tab and copy only the CREATE TABLE rows (without the dbo.sysdiagrams tables etc.), paste them into a new query in SQL Server Management Studio, and delete all the schema names appearing before the table names.
Now you have the full structure including primary keys, identity, etc. Next step - use SQL Server Import and Export Data like you did before (make sure you choose Edit Mappings and set the destination schema to dbo etc.). Also make sure you tick drop and recreate destination table.
On your Dev machine, setup a linked server to your production machine. Then just
INSERT INTO dev.db.dbo.table (fieldlist)
SELECT fieldlist FROM prod.db.dbo.table;
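For reference, registering the linked server itself can be a one-liner (this assumes 'prod' is the actual network name of the production instance and your login has rights on it):

EXEC sp_addlinkedserver @server = N'prod', @srvproduct = N'SQL Server';  -- @server must match the remote instance's name when @srvproduct is 'SQL Server'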