In a DevOps (CI/CD) scenario, where Liquibase is triggered by a specific step of a pipeline, is it good practice for Liquibase to drop all of the application's (microservice's) DB tables and recreate all DDL/DML using changesets (only for test and pre-production environments)?
If it is, why?
thanks
Liquibase is designed to maintain database model consistency between all environments.
And when you drop the database in one of them you break that consistency, since you wouldn't (and shouldn't) drop the production database as well. Check out the Roll Back the Database or Fix Forward? article.
If you need to drop a table, you should write an additional <dropTable> changeSet.
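As a sketch, the same idea in Liquibase's SQL-formatted changelog syntax could look roughly like this (the author, changeset id, and table name are all invented for illustration):

    --liquibase formatted sql

    --changeset alice:drop-legacy-audit-table
    DROP TABLE legacy_audit;
    --rollback CREATE TABLE legacy_audit (id INT PRIMARY KEY, message VARCHAR(255));

The --rollback comment is what lets liquibase rollback undo the drop later if needed.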
If you need to test initial application deployment on an empty database, you can always use containers, as suggested by #bilak in the comments.
I believe database migrations relate to the objects of the database, like tables, views, data, etc.
Can we add/edit users and change their login passwords as part of a migration with Flyway? Is that considered best practice?
Yes, anything that is valid SQL can be run in a migration.
However, the usual use case for Flyway is that migration scripts are stored somewhere permanent so that you have a trail of how the database got to its current state. You will need to take care that credentials are not accidentally exposed in source control (including history), collections of migration scripts on build servers, or anywhere else.
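As a rough sketch of how to keep the credential out of the script itself (MySQL-flavoured SQL, with invented names), a migration can use a Flyway placeholder that is supplied at deploy time:

    -- V5__create_reporting_user.sql  (hypothetical migration file)
    -- ${report_password} is a Flyway placeholder resolved when the migration runs,
    -- so the actual credential never lands in source control.
    CREATE USER 'reporting_user'@'%' IDENTIFIED BY '${report_password}';
    GRANT SELECT ON app_schema.* TO 'reporting_user'@'%';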
According to the Flyway docs:
SQL-based migrations are typically used for
DDL changes (CREATE/ALTER/DROP statements for TABLES, VIEWS, TRIGGERS, SEQUENCES, …)
Simple reference data changes (CRUD in reference data tables)
Simple bulk data changes (CRUD in regular data tables)
So yes, migrations can contain DML as well as DDL.
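As a minimal illustration (file and table names invented), a single versioned migration can mix both:

    -- V6__add_order_status_reference_data.sql  (hypothetical migration file)
    CREATE TABLE order_status (
        code        VARCHAR(10)  PRIMARY KEY,
        description VARCHAR(100) NOT NULL
    );

    INSERT INTO order_status (code, description) VALUES ('NEW', 'Order received');
    INSERT INTO order_status (code, description) VALUES ('SHIPPED', 'Order shipped');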
I want to be able to create a database and tables the same way Amazon's DynamoDB client does it, but with SQL Server. Is that possible?
I'm using .NET Core and this is for integration tests. I figured I can throw the code into the fixture.
Anyone have any ideas?
EF Core Migrations:
"The migrations feature in EF Core provides a way to incrementally
update the database schema to keep it in sync with the application's
data model while preserving existing data in the database."
Create and Drop APIs:
"The EnsureCreated and EnsureDeleted methods provide a lightweight
alternative to Migrations for managing the database schema. These
methods are useful in scenarios when the data is transient and can be
dropped when the schema changes. For example during prototyping, in
tests, or for local caches."
to create your tables at runtime.
And then use one of the Data Seeding techniques:
Data seeding is the process of populating a database with an initial
set of data. There are several ways this can be accomplished in EF
Core:
Model seed data
Manual migration customization
Custom initialization logic
to populate them with known data.
You could put SQL Server (at least the log files) on a RAM disk, and/or use delayed durability (ALTER DATABASE x SET DELAYED_DURABILITY = FORCED). You could also use memory-optimized tables, but I think you won't get full compatibility.
BTW: it is dangerous to rely entirely on such shortcuts in your development process, since developers then get feedback on bad habits and performance problems very late.
For that kind of volatile database (this also applies to containers) you need to add some code to your test pipeline or product to actually create and populate the DB. (If you use containers, you can think about packaging a pre-populated DB snapshot.)
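A minimal sketch of such a setup script (T-SQL, with invented database and table names; the delayed durability line is the optional SQL Server 2014+ speed-up mentioned above):

    IF DB_ID('AppTests') IS NULL
        CREATE DATABASE AppTests;
    GO
    -- optional: trade durability for speed in a throwaway test database
    ALTER DATABASE AppTests SET DELAYED_DURABILITY = FORCED;
    GO
    USE AppTests;
    GO
    CREATE TABLE dbo.Customer (
        CustomerId INT IDENTITY(1,1) PRIMARY KEY,
        Name       NVARCHAR(100) NOT NULL
    );
    INSERT INTO dbo.Customer (Name) VALUES (N'Test customer');
    GO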
Using
SQL Server 2008 (not R2)
Visual Studio 2012 Premium
SQL Server Database Project/SQL Server Data Tools (SSDT)
Are there any problems or potential issues with altering a table schema using a post-deployment script?
My company has a web application (App A) whose backend database has tables that are replicated to another company application's database (App B) using CDC Replication. Making schema changes to these tables causes SSDT to use DROP/CREATE when generating the deployment script. This is a problem for App B's database that uses CDC Replication on these tables, because when the table is dropped and recreated, App B's database's CT_[table_name] tables are dropped, bringing App B down. My solution is to use a post-deployment script to make ALTERations to these tables, instead of allowing SSDT to generate DROP/CREATE. Are there any potential problems or issues with this approach?
I could really use some help.
You could conceivably handle such table changes using a Post-Deployment script if you were to exclude those tables from the SSDT project model. This can be achieved in either of the following ways:
For each of the table files involved in CDC replication, set the Build Action property to None
Or simply remove the affected table files from the project altogether
This would prevent SSDT from attempting to perform any actions on that table at all, so you wouldn't have to worry about the comparison engine producing scripts that break your CDC instances.
Naturally, this would mean that any objects that depend on the excluded table objects (such as procs or views) would also need to be moved to Post-Deployment scripts. This would reduce the traceability of the database, as the excluded tables would no longer have per-file history stored in source control.
Even if a solution can be found that doesn't result in these drawbacks, for example using a pre-compare script per Ed's excellent blog post, there is the issue of deployment atomicity to consider. If part of the deployment occurs in the script that SSDT generates, and another part occurs in a Post-Deployment script, then it's possible for an error to occur that leaves the database in a half-deployed state. This is because SSDT only uses a transaction for the parts of the deployment that it is responsible for; anything included in a Post-Deployment script will be executed after the initial transaction is committed.
This means your Post-Deployment script needs to be written in an idempotent way, so that it can be re-executed if something goes wrong (sorry if that is a bit of an obvious statement... it just always seems like a good point to make whenever post-deploy scripts are mentioned!).
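For instance, a minimal sketch of an idempotent post-deployment change (table and column names invented here) guards the ALTER with an existence check so re-running the script is harmless:

    -- only add the column if it is not already there
    IF COL_LENGTH('dbo.Orders', 'ShippedDate') IS NULL
    BEGIN
        ALTER TABLE dbo.Orders ADD ShippedDate DATETIME NULL;
    END;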
If a higher degree of control over the way your table changes are deployed is desired, without the potential loss of traceability or deployment atomicity, then may I suggest considering a migrations-driven deployment tool like ReadyRoll (disclaimer: I work for Redgate Software). ReadyRoll is a project sub-type of SSDT but uses quite a different deployment style: instead of waiting until deployment to find out that the table will be dropped/recreated, the migration script is produced at development time, allowing changes to the sync operations to be made before committing it to source control.
For more information about how SSDT and ReadyRoll compare, have a look at the ReadyRoll FAQ:
http://www.red-gate.com/library/readyroll-frequently-asked-questions-faq
Are you using the built-in refactoring support to rename columns/tables? If you are, then the deployment should generate an sp_rename.
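For context, a tracked rename ends up being deployed as a call along these lines (object names invented):

    EXEC sp_rename 'dbo.Orders.OrderDt', 'OrderDate', 'COLUMN';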
If you are, and it is a bug because of CDC:
raise a Connect item
do your own ALTER, but you will need to run a script before the deployment, otherwise it will make the changes it wants for you (I call it a pre-compare script)
See https://the.agilesql.club/Blog/Ed-Elliott/Pre-Compare-and-Pre-Deploy-Scripts-In-SSDT for more details.
Ed
Ok, so the problem is probably in my approach to Liquibase. I have implemented some changes on the database side and I want to create changesets, so I simply add a new SQL file to my changesets. When I try to run the liquibase update command, I get an error telling me that some columns already exist in the database.
For me it is normal that, before I create the changeset script, I try out adding the columns in the database (e.g. using phpMyAdmin). Then I want to share these changes with other developers, so I generate the SQL from my changes, add it to the SQL file, and run that file as a changeset.
Can somebody tell me what I am doing wrong?
The problem concerns the situation where I added some new columns to my MySQL table, then created an SQL file with an ALTER TABLE script, and then ran the liquibase update command.
Don't make manual updates in your database. All schema changes have to be done with Liquibase, or else - as in your case - your changesets will conflict with the existing schema.
While having all changes to your database done with Liquibase beforehand is ideal, there are certainly situations where that is not possible. One is the use case you've described. Another would be a hotfix applied to production that needs to be merged back to development.
If you are certain that your changeset has been applied to the environment, then consider running changelogSync. It will assert that all changesets have been applied and will update the Liquibase meta table with the appropriate information.
Although not ideal, we think that changelogSync is required for real-world applications where sometimes life does not progress as we would like. That's why we made certain to expose it clearly in Datical DB. We think it strikes a balance between reality and idealism.
I'm looking for some "Best Practices" for automating the deployment of Stored Procedures/Views/Functions/Table changes from source control. I'm using StarTeam & ANT so the labeling is taken care of; what I am looking for is how some of you have approached automating the pull of these objects from source - not necessarily StarTeam.
I'd like to end up with one script that can then be executed, checked in, and labeled.
I'm NOT asking for anyone to write that - just some ideas or approaches that have (or haven't) worked in the past.
I'm trying to clean up a mess and want to make sure I get this as close to "right" as I can.
We are storing the tables/views/functions etc. in individual files in StarTeam and our DB is SQL 2K5.
We use SQL Compare from redgate (http://www.red-gate.com/).
We have a production database, a development database and each developer has their own database.
The development database is synchronised with the changes a developer has made to their database when they check in their changes.
The developer also checks in a synchronisation script and a comparison report generated by SQL Compare.
When we deploy our application we simply synchronise the development database with the production database using SQL Compare.
This works for us because our application is for in-house use only. If this isn't your scenario then I would look at SQL Packager (also from redgate).
I prefer to separate views, procedures, and triggers (objects that can be re-created at will) from tables. For views, procedures, and triggers, just write a job that will check them out and re-create the latest.
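A sketch of such a re-runnable script for a view (names invented; the same drop-and-recreate pattern works for procedures and triggers on SQL 2005):

    IF OBJECT_ID('dbo.vw_ActiveCustomers', 'V') IS NOT NULL
        DROP VIEW dbo.vw_ActiveCustomers;
    GO
    CREATE VIEW dbo.vw_ActiveCustomers
    AS
        SELECT CustomerId, Name
        FROM dbo.Customer
        WHERE IsActive = 1;
    GO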
For tables, I prefer to have a database version table with one row. Use that table to determine what new updates have not been applied. Then each update is applied and the version number is updated. If an update fails, you have only that update to check, and you can re-run it knowing that the earlier updates will not happen again.
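As a rough sketch of that pattern (all object names invented), each update script checks the stored version, applies its change, and bumps the number, so a re-run skips anything already applied:

    -- one-row version table
    CREATE TABLE dbo.DatabaseVersion (VersionNumber INT NOT NULL);
    INSERT INTO dbo.DatabaseVersion (VersionNumber) VALUES (0);

    -- update script #1: only runs if the database is still at version 0
    IF (SELECT VersionNumber FROM dbo.DatabaseVersion) = 0
    BEGIN
        ALTER TABLE dbo.Customer ADD Email NVARCHAR(200) NULL;
        UPDATE dbo.DatabaseVersion SET VersionNumber = 1;
    END;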