How to run raw SQL to deploy database changes - sql-server

We intend to create DACPAC files using SQL database projects and deploy them automatically to several environments (DEV/QA/PROD) using Azure Pipelines. I can make changes to the schema for a table, view, function, or procedure, but I'm not sure how we can update specific data in a table. I'm sure this is a very common use case, but unfortunately I'm having a hard time implementing it.
Any idea how I can automate creating, updating, or deleting a row in a table?
E.g.: update myTable set myColumn = 5 where someColumn = 'condition'

In your database project you can add a Post Deployment Script.
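Post-deployment scripts run on every publish, so they should be written to be safely re-runnable. A minimal sketch using the table from the question (the 'newRow' value is a hypothetical example):

UPDATE myTable
SET myColumn = 5
WHERE someColumn = 'condition';

-- Guard inserts so re-running the script doesn't create duplicates:
IF NOT EXISTS (SELECT 1 FROM myTable WHERE someColumn = 'newRow')
    INSERT INTO myTable (someColumn, myColumn) VALUES ('newRow', 5);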

Do not. Seriously. I have always found DACPAC to be WAY too limiting for serious operations. Look at how the SQL is generated and realize how little control you have.
The standard approach is to have deployment scripts that you generate and that make the changes in the database, plus a table in the db tracking which scripts have executed (possibly with a checksum so you do not need to change the name to update them); a sketch follows below.
You can partially generate them via schema compare (and then generate the change script), but such scripts also allow you to do things like data scrubbing and multi-step transformations that DACPAC by design cannot do efficiently and easily.
There are plenty of frameworks for this around. They generally belong in the category of developer tools.
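A minimal sketch of the tracking-table idea (all names are illustrative):

CREATE TABLE dbo.ScriptHistory (
    ScriptName varchar(255) NOT NULL PRIMARY KEY,
    Checksum   varbinary(32) NULL,
    ExecutedAt datetime      NOT NULL DEFAULT (GETDATE())
);

-- Each deployment script checks the table before running and records itself afterwards:
IF NOT EXISTS (SELECT 1 FROM dbo.ScriptHistory WHERE ScriptName = '0001_scrub_customer_emails.sql')
BEGIN
    -- ... the actual schema/data change goes here ...
    INSERT INTO dbo.ScriptHistory (ScriptName) VALUES ('0001_scrub_customer_emails.sql');
END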

Related

Is it possible to automatically generate migration scripts by comparing db and code?

I’m seriously confused about how Flyway generally works to maintain the db as code. Suppose I have the following V0 script:
Create table student(
Name varchar(25)
)
That would be my initial db. Now suppose I want to add a new column: why am I being forced to write a V1 script like this one?
Alter table student add column surname varchar(25)
What I’d like to do would be to simply update the V0 script like this:
Create table student(
Name varchar(25),
Surname varchar(25)
)
Then the tool, by comparing against the actual db, should be able to understand that a new column should be created!
This is how tools for other code (Java, JavaScript, ...) work, and that's what I'd like from db-as-code tools.
So my question is: is there a way to achieve this behavior without dropping/recreating the db?
I tagged this question with flyway and liquibase tools but feel free to suggest other tools that would fit my needs.
Whatever way you develop the database, there is no way to achieve this behavior without dropping/recreating the db, because the CREATE TABLE statement assumes that the table you specify isn't already there. You can't use a CREATE OR ALTER statement either, because that syntax isn't supported for tables even where your RDBMS supports it for other objects.
In the early stages of a database project, you can work much more quickly with a build script that you use to create a database with tables, views and so on. You can then insert some data, try it out, maybe run a few tests, and then tear it down. Flyway Community supports this: you just have a single migration script starting from an empty database that you repeatedly 'clean' and 'migrate' until you reach your first version. Flyway takes care of the tear-down and gives you a fresh start by wiping your configured schemas completely clean.
Flyway Teams supports a special type of migration, the 'repeatable', which lets you use SQL files that you can alter as migrations. However, you would need to add logic that drops the table if it already exists before your CREATE TABLE statement executes. This avoids having to run 'flyway clean', but it is a lot of extra work. It also means that you lose the whole advantage of a version representing an exact state of a database.
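In SQL Server terms, such a repeatable migration might look like the sketch below. Note that the drop-and-recreate wipes any existing data, which is why this only suits early development:

IF OBJECT_ID('dbo.student', 'U') IS NOT NULL
    DROP TABLE dbo.student;

CREATE TABLE dbo.student (
    Name    varchar(25),
    Surname varchar(25)
);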
At some point, you are going to use migrations, because you're likely to have copies of the database to keep up to date. Whatever tool you use to update a development or production database, you are going to need a migration because of the existing data in the tables.
Flyway Enterprise supports the automatic generation of a migration, if you are using Oracle or SQL Server. SQL Compare is provided to compare two versions of a database and produce a migration script from one version to the next. This allows you to use a build script as you suggest, compare it with the current version of the database, and generate a migration script to get from the one to the other.

Is it possible to emit sql from grails/groovy newInstance/saves?

My team is looking into db migration tools (e.g., Flyway, Liquibase) and so I'm thinking about how to incorporate changes I make to the db contents using my groovy+grails service method. I'm not referring to changes to columns and/or tables (i.e., domain classes), I'm referring to inserts/updates of rows which represent configuration values for the associated webapp.
My service method is written to be used somewhat interactively. That is, when I'm adding or updating rows in various tables (i.e., newInstance or save), it helps me navigate various db constraints and make sure all the foreign keys and my own business logic are set correctly. I run it repeatedly (rolling back each time afterwards using setRollbackOnly()) until I've found something I'm happy with. The method is written in Groovy, and I don't want to rewrite it in SQL.
Is there a way to get groovy/grails to emit the sql it would execute instead of executing the sql? That is, give me something I could copy/paste into a Flyway migration or Liquibase changeset?
I looked into logging, but I'd have to somehow process that output to substitute the values in and get the proper column names, and even then I'd need to distinguish the lines that actually change the db (maybe I could just extract the inserts and updates). I also looked into these grails database migration scripts, but they appear to either look at domain classes (which isn't where my changes are happening) or at the entire database (which would sweep up a lot of user data too).
Thanks!

Maintain SQL Server scripts

Our firm does not have a dedicated DBA employed but does have select developers performing DBA functions. We update our database often during a development cycle and have a release script with the various updates. We keep our db schema and objects in Visual Studio in a Database Project.
However, we often encounter two stumbling-block problems that cause time-intensive manual intervention:
Developers cannot always sync from the Database Project to their local database, because if we have added a NOT NULL field to an existing table that contains data, the Deploy process from VS to the db isn't smart enough to automagically insert "test" data just to get the field into the table (unless this is a setting someplace?). We would of course follow this up, if possible, with a script to populate the field with real data, but we can't because the deployment fails.
Sometimes a developer will restore a backup from some random past date. There is no way of knowing exactly which db updates were applied to this database, so they don't know which scripts to start applying. What we do in this case is check each script, chronologically, to see whether the changes from that script have been applied to the database. If so, move on to the next script. Repeat.
One method we have discussed is potentially creating a "Database Update Level" table in the database with 1 field, 1 row. It would maintain the level that the database has been updated through. For example, when the first script is run, update the level to 2. In each db script, we would wrap the statements in a check such as
IF (SELECT Database_Update_Level FROM Database_Update_Level) < 2
BEGIN
    -- do some things here
    UPDATE Database_Update_Level SET Database_Update_Level = 2;
END
The db scripts could then be run against any database, because each block executes only if the database hasn't yet been updated to that level.
This feels like we're missing something because this must be a common problem that every development shop that allows developers to develop locally encounters.
Any insights would be greatly appreciated.
Thanks.
About the restore problem: I don't see many solutions. You might try to prevent full restores and run scripts to populate the tables instead. As for versioning structures, do you use SSDT (SQL Server Data Tools) in VS? You can generate DACPACs and diff scripts from them.
But are you saying that you also alter structures directly in the database? Is there no way to avoid that? If not, you could for example use DDL triggers (http://www.mssqltips.com/sqlservertip/2085/sql-server-ddl-triggers-to-track-all-database-changes/) to at least get notified that something changed.
One easy way to solve the NOT NULL problem is to establish default constraints (the default could be just an empty string, the max number value for the data type, the max date value, etc.). When the publish occurs, the new column will be populated with the default value.
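For example (table and column names are hypothetical), the default constraint lets the publish succeed even though existing rows have no value for the new column:

ALTER TABLE dbo.Customer
ADD Region varchar(50) NOT NULL
    CONSTRAINT DF_Customer_Region DEFAULT ('');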
For the second issue, I'd use post-deploy scripts in your SSDT project to keep the data in sync, using NOT EXISTS checks to make incremental changes. That way, you can simply publish the database and allow the data updates to occur one after another.
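A minimal sketch of that pattern in a post-deploy script (table and column names are hypothetical):

IF NOT EXISTS (SELECT 1 FROM dbo.Status WHERE StatusId = 1)
    INSERT INTO dbo.Status (StatusId, StatusName) VALUES (1, 'Active');

IF NOT EXISTS (SELECT 1 FROM dbo.Status WHERE StatusId = 2)
    INSERT INTO dbo.Status (StatusId, StatusName) VALUES (2, 'Inactive');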

Sql Server Project: Post deployment script(s)

I have a database project and I'm wondering what the best practice is for adding pre-determined data, like statuses, types, etc.
Do I have 1 post deployment script for each status / type? OR
Do I have 1 post deployment script that uses :r someStatus.sql for each status/type script?
I suppose a third option could be to have all the inserts in one giant script, but that seems awful to me. In the past, I've used option 2, but I'm not sure why it was done this way. Suggestions?
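For reference, option 2 usually means a master post-deployment script that pulls in the individual files with the SQLCMD :r directive (the file names here are illustrative):

:r .\Scripts\Statuses.sql
:r .\Scripts\Types.sql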
There are tools to package your data. I have happily used RedGate SQL Packager (not free), and DBUnit XML data files extracted from the development environment and sent to the database with an Ant <dbunit> task.
For our scenario, we use a combination of #3 and #2. If we have a new build, we populate empty databases, set the post-deploy inserts that we normally use not to run, then populate the data after the entire build/publish. I tend to batch up related inserts as well, so if I'm inserting 15 statuses, I add them in one script. The downside is that you need to make sure your script can be re-run without causing issues, so inserting into a temp table and then doing a left join against your actual table may be the best solution. It keeps the number of scripts down to a more manageable size.
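A sketch of that temp-table pattern (all names are hypothetical):

CREATE TABLE #StatusStage (StatusId int NOT NULL, StatusName varchar(50) NOT NULL);

INSERT INTO #StatusStage (StatusId, StatusName)
VALUES (1, 'Active'), (2, 'Inactive'), (3, 'Pending');

-- Only rows not already present are inserted, so the script can be re-run safely:
INSERT INTO dbo.Status (StatusId, StatusName)
SELECT s.StatusId, s.StatusName
FROM #StatusStage s
LEFT JOIN dbo.Status t ON t.StatusId = s.StatusId
WHERE t.StatusId IS NULL;

DROP TABLE #StatusStage;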
For incremental releases, I tend to batch inserts by Story (using Scrum) so related scripts go together. It also helps me know when a script has been run in production and can be safely removed from the project.
You may also want to look at having a "reference" database of some sort where you only store the reference values, and then perhaps use a tool such as Red-Gate's Data Compare to pull over the appropriate set of data. The Pro version can be automated/scripted, so you may have an easier way to pull in new data for testing. This may be your best solution in the long run, as you can easily set up which tables you want to copy and set filters on the data.

Tools to update tables in SQL server 2000/2005

Is there any handy tool that can make updating tables easier? Usually I get an Excel file with the original value in one column and the new value in another. Then I write a formula in Excel to create the UPDATE statement. Is there any way to simplify the updating task?
I believe the approach in SQL Server 2000 and 2005 would be different, so could we discuss them both? Thanks.
In addition, these updates are usually requested by "non-programmers" (meaning they don't understand SQL, so it may not be feasible to let them write queries). Is there any tool that can let them update the table directly without having DBAs do this task? That tool would also need to limit their privileges to modifying only certain tables, and ideally it would have a way to roll back changes.
Create a DTS package that will import a CSV file, make the updates, and then archive the file. The user can drop the file in a specific folder designated for the task, or this can be done by an ops person. Schedule the DTS package to run every hour, day, etc.
In case your users insist on keeping Excel, you've got several different possibilities for getting the data transferred to SQL Server. My preferred one would be DTS/SSIS, as mentioned by buckbova.
However, another method is OPENROWSET(), which makes it possible to query your Excel file as if it were a table. I wrote a small article about it here: http://blog.hoegaerden.be/2010/03/29/retrieving-data-from-excel/
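A minimal sketch of the OPENROWSET() approach (the file path, sheet name, and column names are assumptions; on SQL Server 2005 the 'Ad Hoc Distributed Queries' option must be enabled):

-- Update rows by joining directly against the Excel sheet:
UPDATE t
SET t.SomeColumn = x.NewValue
FROM dbo.MyTable t
JOIN OPENROWSET('Microsoft.Jet.OLEDB.4.0',
                'Excel 8.0;Database=C:\Updates\changes.xls;HDR=YES',
                'SELECT OriginalValue, NewValue FROM [Sheet1$]') x
    ON x.OriginalValue = t.SomeColumn;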
Another approach that hasn't been mentioned yet, since I'm not a big fan of letting regular users edit data directly in the DB: any possibility of creating a small custom application for them?
There you go, a couple more possible solutions :-)
Valentino.
I think the best approach is to expose a view on your data, accessible to the users who are allowed to do updates, and set up INSTEAD OF triggers on the view to perform the actual updates on the underlying data. Restrict changes to only the columns they should be changing.
This technique can work on SQL Server 2000 and 2005.
I would add audit triggers on the underlying tables so you can always track changes.
You'll have complete control, and they can connect to it with Access or whatever and perform their maintenance.
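A sketch of the view-plus-trigger setup, assuming a hypothetical dbo.Products table where users may only change the Price column:

CREATE VIEW dbo.vProductMaintenance
AS
SELECT ProductId, ProductName, Price
FROM dbo.Products;
GO

CREATE TRIGGER dbo.trg_vProductMaintenance_Upd
ON dbo.vProductMaintenance
INSTEAD OF UPDATE
AS
BEGIN
    -- Apply only the permitted column; changes to anything else are ignored.
    UPDATE p
    SET p.Price = i.Price
    FROM dbo.Products p
    JOIN inserted i ON i.ProductId = p.ProductId;
END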
You could create accounts in SQL Server for these users and limit their access to only certain tables and columns, with only SELECT/UPDATE/INSERT privileges. Then you could create an Access database with linked tables to these.
