DACPAC pre-pre-deployment script available? - sql-server

I am facing an issue while publishing a DACPAC (SQL Server 2012).
I have a table in production with a column that contains both NULL and non-NULL values.
I need to make this column NOT NULL and drop another column, since the latter may no longer be used by the application.
I tried a pre-deployment script to update the NULL column with values, alter the column, and drop the other column, but I get this error:
"Rows were detected. The schema update is terminating because data loss might occur."
Since this is a sensitive production environment, I don't want to uncheck the publish option 'Block incremental deployment if data loss might occur'.
Does anyone have any ideas?

Let us first understand how DACPACs are deployed. The tasks run in this order:
1. Build an empty DB using the schema from the *.dacpac file; call it "vNextDB".
2. Compare the production DB to vNextDB and generate a script that will deploy the changes; let us call it deploy.sql (you can actually write this script to a file and inspect it manually - look for the documentation on "SqlPackage.exe").
3. Run the pre-deployment script.
4. Run deploy.sql from step (2).
5. Run the post-deployment script.
Looking at the sequence above, you have probably already figured out the issue. If not, look at when the deploy.sql script is generated: it happens before the pre-deployment script is executed, so the comparison still sees the NULL values and generates a script that backs off from the deployment.
You can see 'how' it backs off if you make SqlPackage.exe write the deploy.sql to disk and open it in a text editor (I can't say exactly how it backs off, because each version of SqlPackage.exe does it differently; they keep improving it).
Here are a couple of options:
A) Split your deployment into two. The first deployment has no schema changes; it just runs "UPDATE MyTable SET theColumn = '' WHERE theColumn IS NULL" in a pre/post-deployment step (see the sketch at the end of this answer). The second deployment then contains the actual schema changes.
B) Take a step back and think comprehensively about how you are going to handle versioning of your DB schema. You should have a versioning strategy, and you should think about backward compatibility and similar concerns. (i.e. you can introduce a new column that is NOT NULL with a default value, and the DB clients can start using that column, while an old client that can't be updated is still supported because you are not removing the old column right away.)
Even if you choose option A to work around the current issue, long term you should establish a versioning strategy. Every self-respecting data store has a versioning strategy :-), you should give one to your DB.
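To make option A concrete, here is a minimal sketch of the data-only script the first deployment could run in its pre- or post-deployment step (MyTable and theColumn are just the placeholder names from the example above):

-- Deployment 1: data fix only, no schema change in the project yet.
-- Backfill NULLs so the next deployment can safely make the column NOT NULL.
UPDATE dbo.MyTable
SET theColumn = ''           -- or whatever business default makes sense
WHERE theColumn IS NULL;

-- Deployment 2 (a separate publish) then carries the schema changes:
-- the ALTER COLUMN ... NOT NULL and the DROP of the obsolete column.

Because the second deployment's comparison no longer sees NULL rows, the data-loss check does not block the ALTER.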

Related

How to run raw SQL to deploy database changes

We intend to create DACPAC files using SQL database projects and distribute them automatically to several environments (DEV/QA/PROD) using Azure Pipelines. I can make changes to the schema for a table, view, function, or procedure, but I'm not sure how we can update specific data in a table. I am sure this is a very common use case, but unfortunately I am having a hard time implementing it.
Any idea how I can automate creating/updating/deleting a row in a table?
E.g.: update myTable set myColumn = 5 where someColumn = 'condition'
In your database project you can add a Post-Deployment Script.
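For example, a minimal sketch (myTable, myColumn, and someColumn are the placeholder names from the question; the guarded INSERT at the end is a hypothetical extra row, only there to show the pattern):

-- Post-Deployment Script (e.g. Script.PostDeployment1.sql in the SSDT project).
-- It runs after every publish to DEV/QA/PROD, so write it to be safely re-runnable.
UPDATE dbo.myTable
SET myColumn = 5
WHERE someColumn = 'condition';

-- Inserts and deletes need a guard so repeated publishes don't duplicate rows or fail:
IF NOT EXISTS (SELECT 1 FROM dbo.myTable WHERE someColumn = 'new-row')
    INSERT INTO dbo.myTable (someColumn, myColumn) VALUES ('new-row', 1);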
Do not. Seriously. I have always found DACPAC to be WAY too limiting for serious operations. Look at how the SQL is generated and realize how little control you have.
The standard approach is to have deployment scripts that you generate and that make the changes in the database, plus a table in the db tracking which scripts have executed (possibly with a checksum so you do not need to change the name in order to update them).
You can easily generate them partially by schema compare (and then generate the change script), but such scripts also allow you to do things like data scrubbing and multi-step transformations that DACPAC by design cannot do efficiently and easily.
There are plenty of frameworks for this around. They generally belong in the category of developer tools.
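A minimal sketch of the tracking-table idea described above (names are illustrative, not taken from any particular framework):

-- One row per migration script that has been applied to this database.
CREATE TABLE dbo.SchemaMigrations (
    ScriptName   nvarchar(255) NOT NULL PRIMARY KEY,
    Checksum     varbinary(32) NULL,     -- e.g. HASHBYTES('SHA2_256', <script text>) to detect edits
    AppliedAtUtc datetime2     NOT NULL DEFAULT SYSUTCDATETIME()
);
-- The deployment runner executes a script only if its name (or checksum) is not recorded here,
-- then inserts a row so the same script is skipped on every later run.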

SSDT implementation: Alter table instead of Create

We are just trying to implement SSDT in our project.
We have lots of clients for one of our products, which is built on a single DB (DBDB) with tables and stored procedures only.
We created one SSDT project for database DBDB (using VS 2012 > SQL Server Object Browser > right click on project > New Project).
Once we build that project it creates one .sql file.
Problem: if we run that file on a client's DBDB, it creates all the tables again and deletes all the records in them [this fulfills the requirements but deletes the existing records :-( ].
What we need: only the changes that are not yet present in the client's DBDB should be applied.
Note: we have no direct access to the client's DBDB database to compare it with our latest DBDB. We can only send them some magic script file which will update their DBDB to the latest state.
The only way to update the client's DB is to compare the DB schemas and then apply the delta. Any way you do it, you will need some way to get hold of the schema that's running at the client:
If you ship a versioned product, it is easiest to deploy version N-1 of it to your development server and compare that to the version N you are going to ship. This way, SSDT can generate the migration script you need to send to the client to pull their DB up to the current schema.
If you don't have a versioned product, or your client might have altered the schema, you will need to find a way to extract the schema data on site (maybe using SSDT there) and then let SSDT create the delta.
Option: You can skip using the compare feature of SSDT altogether, but then you need to write your migration script yourself. For each modification to the schema, you write the DDL statements yourself and wrap them in IF clauses that check for the old state, so the changes are only made once and only if the old state exists (see the sketch at the end of this answer). This way, it doesn't really matter from which state to which state you are going, as the script determines for each step whether and what to do.
The last option is the most flexible, but it requires thorough testing of its own, and of course it should have been started well before the situation you are in now, where you no longer know what the changes have been. But it can help for next time.
This only applies to schema changes on the tables, because you can always fall back to just dropping and recreating ALL stored procedures, since nothing is lost by dropping them.
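Here is a minimal sketch of such a guarded, state-checking migration step (table and column names are purely illustrative):

-- Step 12: add dbo.Orders.Status only if it does not exist yet.
IF NOT EXISTS (SELECT 1 FROM sys.columns
               WHERE object_id = OBJECT_ID(N'dbo.Orders') AND name = N'Status')
BEGIN
    ALTER TABLE dbo.Orders
        ADD Status int NOT NULL CONSTRAINT DF_Orders_Status DEFAULT (0);
END

-- Step 13: widen a column only if it still has the old nvarchar(10) definition (max_length is in bytes).
IF EXISTS (SELECT 1 FROM sys.columns
           WHERE object_id = OBJECT_ID(N'dbo.Orders') AND name = N'Reference' AND max_length = 20)
BEGIN
    ALTER TABLE dbo.Orders ALTER COLUMN Reference nvarchar(50) NULL;
END

Because every step inspects the actual state of the database, the same script runs correctly against any client DB, regardless of which version it started from.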
It sounds like you may not be pushing the changes correctly. You have a couple of options if you've built a SQL Project.
Give them the dacpac and have them use SQLPackage to update their own database.
Generate an update script against your customer's "current" version and give that to them.
In any case, it sounds like your publish option might be set to drop and recreate the database each time. I've written quite a few articles on SSDT SQL Projects and getting started that might be helpful here: http://schottsql.blogspot.com/2013/10/all-ssdt-articles.html

Maintain SQL Server scripts

Our firm does not have a dedicated DBA employed but does have select developers performing DBA functions. We update our database often during a development cycle and have a release script with the various updates. We keep our db schema and objects in Visual Studio in a Database Project.
However, we often encounter two stumbling-block problems that cause time-intensive manual intervention:
Developers cannot always sync from the Database Project to their local database, because if we have added a NOT NULL field to an existing table that contains data, the Deploy process from VS to the db isn't smart enough to automagically insert "test" data just to get the field into the table (unless this is a setting someplace?). We would of course follow this up, if possible, with a script to populate the field with real data, but we can't because the deployment fails.
Sometimes a developer will restore a backup from some random past date. There is no way of knowing exactly which db updates were applied to that database, so they don't know which scripts to start applying. What we do in this case is check each script, chronologically, to see whether the changes from that script have been applied to the database. If so, move on to the next script. Repeat.
One method we have discussed is potentially creating a "Database Update Level" table in the database with 1 field, 1 row. It would maintain the level that the database has been updated through. For example, when the first script is run, update the level to 2. In each db script, we would wrap the statements in a check such as
IF (SELECT Database_Update_Level FROM dbo.Database_Update_Level) < 2
BEGIN
    -- do some things here
    UPDATE dbo.Database_Update_Level SET Database_Update_Level = 2;
END
The db scripts can then be run on any database, because each block of statements only executes if the database is still below the corresponding level.
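A minimal sketch of that single-row table, matching the names used in the example above (the seed value is illustrative):

CREATE TABLE dbo.Database_Update_Level (
    Database_Update_Level int NOT NULL
);
INSERT INTO dbo.Database_Update_Level (Database_Update_Level) VALUES (1);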
This feels like we're missing something because this must be a common problem that every development shop that allows developers to develop locally encounters.
Any insights would be greatly appreciated.
Thanks.
About the restore problem, I don't see many solutions; you might try to prevent full restores and run scripts to populate the tables instead. As for versioning structures, do you use SSDT (SQL Server Data Tools) in VS? You can generate DACPACs and generate diff scripts.
But what you say is that you also alter structures directly in the database? Is there no way to avoid that? If not, you could for example use DDL triggers (http://www.mssqltips.com/sqlservertip/2085/sql-server-ddl-triggers-to-track-all-database-changes/) to at least get notified that something changed.
One easy way to solve the NOT NULL problem is to establish default constraints (the default could just be an empty string, the max number value for the data type, the max date value, etc.). When the publish occurs, the new column will be populated with the default value.
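For example (a sketch with made-up table and column names), the new column is declared in the project with its default, and the generated ALTER then uses that default to populate existing rows:

-- In the table's .sql file in the database project:
CREATE TABLE dbo.Customer (
    CustomerId int          NOT NULL PRIMARY KEY,
    Region     nvarchar(50) NOT NULL CONSTRAINT DF_Customer_Region DEFAULT ('')  -- new NOT NULL column
);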
For the second issue I'd use post-deploy scripts in your SSDT project to keep the data in sync, using 'NOT EXISTS' to make incremental changes. That way, you can simply publish the database and allow the data updates to occur one after another.
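A minimal sketch of that post-deploy pattern (the Status table and its rows are illustrative):

-- Post-deployment script: every INSERT is guarded, so each publish can re-run it safely.
IF NOT EXISTS (SELECT 1 FROM dbo.Status WHERE StatusId = 1)
    INSERT INTO dbo.Status (StatusId, Name) VALUES (1, 'Open');
IF NOT EXISTS (SELECT 1 FROM dbo.Status WHERE StatusId = 2)
    INSERT INTO dbo.Status (StatusId, Name) VALUES (2, 'Closed');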

Sql Server Project: Post deployment script(s)

I have a database project and I'm wondering what best practice is for adding pre-determined data, like statuses, types, etc...
Do I have 1 post deployment script for each status / type? OR
Do I have 1 post deployment script that uses :r someStatus.sql for each status/type script?
I suppose a 3rd option could be to have all inserts in one giant script but that seems awful to me. In the past, I've used option 2, but I'm not sure why it was done this way. Suggestions?
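For reference, option 2 usually means one master post-deployment script that pulls the per-status/type scripts in with SQLCMD :r includes (the file names below are illustrative):

-- Script.PostDeployment1.sql - the single post-deployment script SSDT executes, in SQLCMD mode.
:r .\ReferenceData\Statuses.sql
:r .\ReferenceData\Types.sql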
There are tools to package your data.
I have happily used RedGate SQL Packager (not free) and
DBUnit XML datafiles extracted from development environment and sent to the database with an Ant <dbunit> task.
For our scenario, we use a combination of #3 and #2. If we have a new build, we populate empty databases, set the post-deploy inserts that we normally use not to run, then populate the data after the entire build/publish. I tend to batch up related inserts as well, so if I'm inserting 15 statuses, I add them in one script. The downside is that you need to make sure your script can be re-run without causing issues, so inserting into a temp table and then doing a left join against your actual table may be the best solution (see the sketch below). It keeps the number of scripts down to a more manageable size.
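A minimal sketch of that re-runnable temp-table pattern (the Status table and its rows are illustrative):

-- Post-deployment: stage the expected rows, then insert only the ones that are missing.
CREATE TABLE #ExpectedStatus (StatusId int NOT NULL, Name nvarchar(50) NOT NULL);
INSERT INTO #ExpectedStatus (StatusId, Name)
VALUES (1, 'Open'), (2, 'In Progress'), (3, 'Closed');

INSERT INTO dbo.Status (StatusId, Name)
SELECT e.StatusId, e.Name
FROM #ExpectedStatus AS e
LEFT JOIN dbo.Status AS s ON s.StatusId = e.StatusId
WHERE s.StatusId IS NULL;        -- only rows not already present

DROP TABLE #ExpectedStatus;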
For incremental releases, I tend to batch inserts by Story (using Scrum) so related scripts go together. It also helps me know when a script has been run in production and can be safely removed from the project.
You may also want to look at having a "reference" database of some sort where you only store the reference values, then perhaps a tool such as Red-Gate's Data Compare to pull over the appropriate set of data. The Pro version can be automated/scripted so you may have an easier way to pull in new data for testing. This may be your best solution in the long run as you can easily set up which tables you want to copy and set filters on data.

Compare SQL Server DB schema & data (at the same time) and generate scripts

I've got a reasonably large / complicated DB which I need to upgrade in the field from version 1 to version 2. There's a lot of changes in schema and importantly data between the two.
Yes, I know this should have been version controlled, à la:
http://www.codinghorror.com/blog/2008/02/get-your-database-under-version-control.html
but it wasn't - it will be when I am done.
So, the current problem: I'm faced with the choice of either going through all the commits or trying to diff between two versions of the db. So far I've tried:
http://opendbiff.codeplex.com/
http://www.sqldelta.com/
http://www.red-gate.com/
However, none of them seem to be able to successfully generate schema upgrade scripts, because they don't also handle the data at the same time. This results in foreign key violations when adding new keys to tables: the referenced table is new, and while its schema has been created, the data it contains has not. Well, it could be, but that requires me to use a different part of the tool and then mix the two scripts together.
I know this may look like a duplicate of:
What is best tool to compare two SQL Server databases (schema and data)?
which is where I found most of the existing tools I've tried, but so far I've not managed to get any of these to produce a working schema migration script (I'm really not too fussed about the data, but I do need the data that the foreign keys require - which, to be honest, is all the data that differs, since I've deployed both the old and the new version).
Am I expecting too much?
Should I give up and start manually stitching together what I do have?
Or do I go through all the commits and manually create upgrade scripts?
I can't think of more powerful tools available than the ones you seem to have tried. If those fail, my homegrown versioning system probably won't help you much either.
However, you should be able to generate an update script and then manually edit it to add the data transformations to it.
And/or you could disable the foreign key constraints for the time that the update script runs.
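A minimal sketch of the disable/re-enable approach (dbo.Orders is an illustrative table name; repeat per affected table):

-- Temporarily stop enforcing foreign key (and check) constraints on this table.
ALTER TABLE dbo.Orders NOCHECK CONSTRAINT ALL;

-- ... run the schema + data parts of the upgrade script here ...

-- Re-enable the constraints and re-validate existing rows (WITH CHECK keeps them trusted).
ALTER TABLE dbo.Orders WITH CHECK CHECK CONSTRAINT ALL;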
There is no such thing as doing schema and data "at the same time". Even if you have them in one big script, you are still doing the schema first and then the data. If the schema script creates a new table and adds a constraint to it, there is no reason you should get a referential integrity violation error, as there are no rows in those tables.
In any case, you should give our xSQL Schema Compare and Data Compare tools a try; you will be impressed with the performance and the level of control you get.
