Our application uses a single code base backed by client-specific databases.
What we are trying to achieve is code deployment via the usual code push to the IIS website, and database deployments using SQL DACPACs (schema-only changes) on Azure DevOps.
The issue is that some changes do not go to all of the client databases simultaneously, so we need the ability to select the target databases for the current release.
Sometimes we will release (schema-only) changes to all of them, sometimes to only a few.
One way is to create separate release pipelines for each database and release them one by one.
Is there a way to include checkboxes in the release itself, so that every release asks which databases these changes should go to?
Another possible solution would be a way to call 5-10 release pipelines (one per database) from my main pipeline when creating a release, with some kind of checkboxes that let me pick which ones to run and which ones to skip for this release.
I need suggestions/best industry practices for this scenario.
Yes, there is. You can configure one release pipeline with a SQL Server Database Deploy task for each target database. When you use that pipeline to create a release, Azure DevOps gives you the flexibility to enable or disable each task for that specific release. Once you have the release pipeline created, the process is:
Select your release pipeline
Create release
Edit release (not pipeline)
Right-click on each SQL Server database deploy task and Enable or Disable as needed
Save
Deploy
You could do it by adding a condition to each task step that represents a deployment to one of your databases:
steps:
- task: PowerShell@2
  condition: eq(variables['deployToDb1'], true)
  inputs:
    targetType: 'inline'
    script: |
      Write-Host "Release to DB 1"
- task: PowerShell@2
  condition: eq(variables['deployToDb2'], true)
  inputs:
    targetType: 'inline'
    script: |
      Write-Host "Release to DB 2"
The variables deployToDb1 and deployToDb2 are defined in the UI on the Edit pipeline page and can be overridden later when the release is run.
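If the pipeline is defined in YAML, another option is boolean runtime parameters, which appear as actual checkboxes on the "Run pipeline" dialog. A minimal sketch (parameter names and the PowerShell placeholder steps are illustrative):
parameters:
- name: deployToDb1
  displayName: 'Deploy schema changes to client DB 1'
  type: boolean
  default: false
- name: deployToDb2
  displayName: 'Deploy schema changes to client DB 2'
  type: boolean
  default: false

steps:
- ${{ if eq(parameters.deployToDb1, true) }}:
  - task: PowerShell@2
    inputs:
      targetType: 'inline'
      script: Write-Host "Release to DB 1"
- ${{ if eq(parameters.deployToDb2, true) }}:
  - task: PowerShell@2
    inputs:
      targetType: 'inline'
      script: Write-Host "Release to DB 2"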
I have a SQL Server database project (.sqlproj) which I am using as part of a CI/CD pipeline to deploy database changes. I would like to deploy the same code to two databases (Dev and Production) but each with a slightly different configuration:
In Dev, I have an Azure AD group Database-Dev-Developers:
CREATE USER [Database-Dev-Developers] FOR EXTERNAL PROVIDER;
In Production, I have an Azure AD group Database-Prod-Developers:
CREATE USER [Database-Prod-Developers] FOR EXTERNAL PROVIDER;
I can find no way to alter which scripts are built/published based on the configuration. Ideally I'd like to be able to specify the project configuration at build time (Debug/Release), which changes the output.
I have tried adding conditional expressions for the relevant files in the .sqlproj file, but this has no effect:
Condition=" '$(Configuration)' == 'Debug' "
You should look into using a Token Replacement step in your pipeline. You can add different variable values for Dev vs Prod to replace the tokens with. Then you just need one tokenized configuration file that can be used for both Dev and Prod.
I'm not exactly sure how kosher it is to use tokens in a .sqlproj file; it depends on what configuration values you're trying to replace. But I've seen it used very successfully on ...config.json files in modern .NET Core based projects.
Another thing you can look into is File Transformations. I don't have any experience using these though.
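For the token-replacement approach, here is a rough sketch using the community "Replace Tokens" extension (qetza's replacetokens task; the task version, inputs and file names are assumptions and vary between versions), with the idea that the user-creation script carries a token such as #{DeveloperGroupName}# that is substituted from a pipeline variable per environment:
variables:
  DeveloperGroupName: 'Database-Dev-Developers'   # override with 'Database-Prod-Developers' for Production

steps:
- task: replacetokens@5
  inputs:
    targetFiles: '**/CreateDevelopersUser.sql'   # hypothetical tokenized script
    tokenPattern: 'default'                      # matches #{DeveloperGroupName}# style tokens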
I have found a partial solution to this problem. One can create a publish profile, which contains instructions to ignore certain object types. See this helpful blog post which details the process, summarised below:
In Visual Studio, right-click SSDT project
Publish -> Advanced
Select the 'drop' tab, and check 'Drop objects in target but not in source'
Check 'Do not drop...' next to the object types you wish to ignore. For me this was 'Do not drop users' and 'Do not drop roles'
Save the publish profile
Extra step for Azure DevOps: in the Azure SQL Database deployment task, specify the generated publish profile XML file in the 'Publish Profile' setting.
This has the drawback that non-sensitive security settings (such as role membership) cannot be deployed, but this was a trade-off I was willing to make in my situation.
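As a concrete illustration of that extra Azure DevOps step, a YAML sketch assuming the Azure SQL Database deployment task (SqlAzureDacpacDeployment); the paths and names are placeholders and input names can differ between task versions:
- task: SqlAzureDacpacDeployment@1
  inputs:
    azureSubscription: 'my-service-connection'   # placeholder service connection
    ServerName: 'myserver.database.windows.net'
    DatabaseName: 'MyDatabase'
    DeployType: 'DacpacTask'
    DacpacFile: '$(Pipeline.Workspace)/drop/MyDatabase.dacpac'
    PublishProfile: '$(Pipeline.Workspace)/drop/MyDatabase.publish.xml'   # the profile saved above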
So I figured out how to disable the triggers from being part of a DACPAC, but unfortunately that's just a user level setting in Visual Studio, so I can't save it to source control and deploy the solution with those changes.
There's got to be some way to filter out the triggers from the DACPAC in the definition of the deployment pipeline task. But I have no idea where to start; all I see are some text fields and none of them are even generic "command line options" fields...
We are supporting several microservices written in Java using Spring Boot and deployed in OpenShift. Some microservices communicate with databases. We often run a single microservice in multiple pods in a single deployment. When each microservice starts, it starts liquibase, which tries to update the database. The problem is that sometimes one pod fails while waiting for the changelog lock.
When this happens in our production OpenShift cluster, we expect the other pods to fail when restarting because of the same changelog lock issue. So, in the worst-case scenario, all pods will be waiting for the lock to be released.
We want Liquibase to automatically prepare our database schemas when each pod is starting.
Is it good to store this logic in every microservice? How can we automatically resolve the problem when the Liquibase changelog lock issue appears? Do we need to put the database preparation logic in a separate deployment?
So maybe I should paraphrase my question. What is the best way to run DB migrations in a microservice architecture? Maybe we should not run a DB migration in each pod? Maybe it is better to do it with a separate deployment, or with an extra Jenkins job outside OpenShift entirely?
We're running liquibase migrations as an init-container in Kubernetes. The problem with running Liquibase in micro-services is that Kubernetes will terminate the pod if the readiness probe is not successful before the configured timeout. In our case this happened sometimes during large DB migrations, which could take a few minutes to complete. Kubernetes will terminate the pod, leaving DATABASECHANGELOGLOCK in a locked state. With init-containers you will not have this problem. See https://www.liquibase.org/blog/using-liquibase-in-kubernetes for a detailed explanation.
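For illustration, a minimal sketch of that pattern with the Liquibase CLI image (the image tag, changelog path and credential wiring are all placeholders; the linked post shows a complete setup, and the changelog must be baked into the image or mounted):
spec:
  initContainers:
  - name: liquibase
    image: liquibase/liquibase:4.25            # hypothetical tag
    args:
    - --url=jdbc:postgresql://db:5432/app      # placeholder connection string
    - --changeLogFile=changelog/db.changelog-master.xml
    - --username=$(DB_USER)
    - --password=$(DB_PASS)
    - update
    envFrom:
    - secretRef:
        name: db-credentials                   # assumed secret supplying DB_USER / DB_PASS
  containers:
  - name: app
    image: my-registry/my-service:latest       # the actual microservice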
UPDATE
Please take a look at this Liquibase extension, which replaces the StandardLockService with database session locks: https://github.com/blagerweij/liquibase-sessionlock
This extension uses MySQL or Postgres user lock statements, which are automatically released when the database connection is closed (e.g. when the container is stopped unexpectedly). The only thing required to use the extension is to add a dependency to the library. Liquibase will automatically detect the improved LockService.
I'm not the author of the library, but I stumbled upon it when I was searching for a solution. I helped the author by releasing the library to Maven Central. It currently supports MySQL and PostgreSQL, but it should be fairly easy to add support for other RDBMSs.
When Liquibase kicks in during the Spring Boot app deployment, it performs (at a very high level) the following steps:
lock the database (create a record in databasechangeloglock)
execute changeLogs;
remove database lock;
So if you interrupt application deployment while Liquibase is between steps 1 and 3, your database will remain locked. When you then try to redeploy your app, Liquibase will fail, because it will treat your database as locked.
So you have to unlock the database before deploying the app again.
There are two options that I'm aware of:
Clear the databasechangeloglock table or set locked to false, i.e. DELETE FROM databasechangeloglock or UPDATE databasechangeloglock SET locked=0.
Execute liquibase releaseLocks command. You can find documentation about it here and here.
We managed to solve this in my company by following the same init-container approach Liquibase suggests, but instead of using a new container and running the Liquibase migration via the Liquibase CLI, we reuse the existing Spring Boot service setup and just execute the Liquibase logic. We have created an alternative main class that can be used in an entrypoint to populate the database using Liquibase.
The InitContainerApplication class brings the minimal configuration required to start the application and set up Liquibase.
Typical usage:
entrypoint: "java -cp /app/extras/*:/app/WEB-INF/classes:/app/WEB-INF/lib/* com.backbase.buildingblocks.auxiliaryconfig.InitContainerApplication"
Here is the class:
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.ApplicationArguments;
import org.springframework.boot.ApplicationRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.SpringBootConfiguration;
import org.springframework.boot.autoconfigure.ImportAutoConfiguration;
import org.springframework.context.ApplicationContext;

@SpringBootConfiguration
@ImportAutoConfiguration(InitContainerAutoConfigurationSelector.class)
public class InitContainerApplication implements ApplicationRunner {

    @Autowired
    private ApplicationContext appContext;

    public static void main(String[] args) {
        SpringApplication.run(InitContainerApplication.class, args);
    }

    @Override
    public void run(ApplicationArguments args) throws Exception {
        // Exit as soon as the context (and therefore Liquibase) has finished starting up
        SpringApplication.exit(appContext, () -> 0);
    }
}
Here is how it is used as an init container:
spec:
  initContainers:
  - name: init-liquibase
    image: my-registry/my-service:latest   # same image as the main service (assumed)
    command: ['java']
    args: ['-cp', '/app/extras/*:/app/WEB-INF/classes:/app/WEB-INF/lib/*',
           'com.backbase.buildingblocks.auxiliaryconfig.InitContainerApplication']
Finally, we solved this problem in another project by removing the Liquibase migration from microservice start-up. Now a separate Jenkins job applies the migration, and another Jenkins job deploys and starts the microservice after the migration has been applied. So the microservice itself no longer applies database updates.
I encountered this issue when one of the Java applications I manage abruptly shut down.
The logs were displaying the error below when the application tried to start:
waiting to acquire changelock
Here's how I fixed the issue:
Stopping the application
Deleting the databasechangelog and databasechangeloglock tables in the database connected to the application.
Restarting the application
In my case the application was connected to two databases. I had to delete the databasechangelog and databasechangeloglock tables in both databases and then restart the application. The tables in both databases have to be in sync.
After this the application was able to acquire the changelog lock.
My DBA and I are trying to work out how to effectively use Microsoft's Database projects and the Dacpacs they generate to simplify our production deployment system.
Ideally, I would be able to build and/or publish the .sqlproj, generating a .dacpac file, which can then be uploaded to the production server and used to upgrade the database from whatever version it was to the latest version. This is similar to how we're doing website deployments, where I publish to a package, and then that package is uploaded to the server and imported into IIS.
However, we can't work out how to make this work. The DBA has already created the database and added it to our Availability Groups. And every time we try to apply the Dacpac, it tries to adjust settings which it can't because of the AGs.
Nothing I've been able to do has managed to create a .dacpac file which doesn't try to impose settings on the database. The closest option I've found will exclude them when publishing, but as best as I can tell you can't publish to an inaccessible database, and only the DBA has access to the production server.
Can I actually use dacpacs this way?
There are two parts to this. Firstly, how do you stop deploying settings you don't want to deploy - can you give an example of one of the settings that doesn't apply?
For the second part, where you do not have access to the SQL Server, there are a few different ways to handle this:
Use an offline copy to generate the deploy script
Get the DBA to generate the deploy script
Get the DBA to deploy using the dacpac
Get read only access to the database
Option 1: "Use an offline copy to generate the deploy script"
You need to compare the dacpac to something, and if you do not have a TDS connection (default instance, default port tcp:1433), you can use a version of the database that matches production, either through:
Use log shipping to restore a copy of production somewhere you can access it
Get a development db and production in sync, then every release goes to the dev and prod databases, ensuring that they stay in sync
The log-shipped copy is the easiest. If it goes to a development server, you can normally have server-level permissions that give you access, or you can create the correct permissions at the database level rather than at the production server level.
If the data is sensitive then the log-shipped copy might not be appropriate, so you could try to keep a development and production database in sync, but this is difficult and requires that the DBA be "well trained" into not running anything that isn't first run against the dev database as well.
Once you have access to a database that has exactly the same schema as the production database, you can use sqlpackage.exe /Action:Script to generate a deploy script; in fact, because it isn't the production database, you can generate the script as part of your CI process :).
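A sketch of what that CI step could look like in an Azure DevOps YAML pipeline (server name and paths are illustrative; it assumes sqlpackage.exe is available on the agent):
steps:
- script: >
    sqlpackage.exe /Action:Script
    /SourceFile:"$(Build.ArtifactStagingDirectory)\MyDatabase.dacpac"
    /TargetServerName:"log-shipped-copy.example.com"
    /TargetDatabaseName:"MyDatabase"
    /OutputPath:"$(Build.ArtifactStagingDirectory)\deploy.sql"
  displayName: 'Generate deploy script against the log-shipped copy'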
Option 2: "Get the DBA to generate the deploy script"
This is to get the DBA to copy the dacpac to the production server and to use sqlpackage.exe, which will be in the "Program Files (x86)\Microsoft Sql Server\Version\DAC\bin" folder, to compare the dacpac to the database and generate a script that he can review before deploying.
Option 3: "Get the DBA to deploy using the dacpac"
This is similar to option 2, but instead of generating a script to deploy in SSMS, the DBA just uses sqlpackage.exe /Action:Publish to deploy the changes directly.
Option 4: "Get read only access to the database"
This is actually my preferred option, as it means that you always build scripts against what is guaranteed to be the state of production (because it is production). In your case you would need the TCP port opened between your machine (or ideally your build machine) and the SQL Server, and then you will need these permissions:
https://the.agilesql.club/Blogs/Ed-Elliott/What-Permissions-Do-I-Need-To-Generate-A-Deploy-Script-With-SSDT
As I said, option 4 is always my preferred option, but I understand that it isn't always possible.
Options 2 and 3 are fraught with worry, as you will be running scripts that haven't been tested anywhere; with options 1 and 4 you can generate the scripts and then deploy them to a test/QA database, as long as those databases have the same schema as production. The scripts can also go through a code review process.
If you do option 2 or 3, then I would create a batch file or PowerShell script that drives sqlpackage.exe. If they deploy from a different server that doesn't have sqlpackage.exe, you can copy the DAC folder to that machine and run sqlpackage from there; you do not have to actually install it (you may also need to copy in Microsoft.SqlServer.TransactSql.ScriptDom.dll from the "Program Files (x86)\Microsoft Sql Server\Version\SDK\Assemblies" folder).
I hope this helps, if you have any more questions feel free to post here or ping me :)
ed
Suppose you have a database project and you do NOT have "Always re-create database" checked off in your Database.sqldeployment settings. And suppose you deploy to a server that already has a database with the same name as the one you are deploying.
Under what other circumstances will the database deploy generate a script with a "DROP DATABASE" statement?
If you don't ever, ever, ever want your database to be dropped by the deployment script generated by right clicking your database project and selecting "Deploy", what are some of the steps you can take to prevent this?
In addition to "Always re-create database" NOT being checked off, you should also check the Development tab on your database project's Properties page. Make sure you define a target connection; when you don't define one, the project will always and only deploy as if the target database does not exist. This behavior is by design. See this link for more details.
My suggestion is to create the connection using Windows Authentication, so each user has access only to the extent they are supposed to.
Also please note that you will have to do this for each Deployment Configuration (e.g. Debug, Release, etc.)
I personally set the deploy action to just create a script and run it manually to be on the safe side!