We use different service accounts for the different deployment environments, so Dev has Account_A and Prod has Account_B, and any test app using Account_A will not have access to Prod. Or, as another example, Account_A can have read/write permissions in Dev, but only read permissions in Prod.
Up until now there has been no source control on the database definitions, just manual scripts everywhere, and I'd like to create an SSDT solution in Azure DevOps for this. I understand how you can set up releases to handle different database names across environments (Db_Dev vs Db_Prod, for example), but I can't find anything about handling different users & permissions across environments.
Is this possible in SSDT? As far as I can tell, I have two options, but I'm hoping there's a better way:
Handle users and permissions outside of source control.
Handle them somehow in a post-deployment script.
Caveat: I'm only talking about Windows Authentication users & groups. Passwords will obviously not be going into source control.
I wrote about this a long time ago here: https://schottsql.com/2013/05/14/ssdt-setting-different-permissions-per-environment/
You really are dealing with environment variables and a bunch of post-deploy scripts to do this. Your better option is to assign permissions to database roles so those are consistent everywhere, then assign your users to those roles in each environment as appropriate - outside of SSDT. In the long run, that's a lot less painful than trying to create and maintain logins and users in a series of post-deploy scripts.
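To make that concrete, here is a minimal sketch of the role-based approach (the role, schema, and domain names are illustrative, not from the original setup). The role and its grants live in the SSDT project and are identical in every environment; only the membership, applied outside SSDT, differs:

    -- In the SSDT project: a role with fixed permissions, same in Dev and Prod.
    CREATE ROLE [app_writer];
    GO
    GRANT SELECT, INSERT, UPDATE, DELETE ON SCHEMA::[dbo] TO [app_writer];
    GO
    -- Outside SSDT, run once per environment (hypothetical domain/accounts):
    --   Dev:  ALTER ROLE [app_writer] ADD MEMBER [CONTOSO\Account_A];
    --   Prod: ALTER ROLE [app_writer] ADD MEMBER [CONTOSO\Account_B];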
On the Azure DevOps side, an environment is a collection of resources, such as Kubernetes clusters and virtual machines, that can be targeted by deployments from a pipeline. Typical examples of environment names are Dev, Test, QA, Staging, and Production. You can secure environments by specifying which users and pipelines are allowed to target them.
You can control who can create, view, use, and manage environments with user permissions. There are four roles - Creator (scope: all environments), Reader, User, and Administrator. In a specific environment's user permissions panel, you can set the permissions that are inherited and you can override the roles for each environment.
For more details, check the following link:
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/environments?view=azure-devops#security
Managing users and permissions in SSDT is a bit of a pain, and usually they are not maintained there. However, you still have options:
Create a post-deployment script, as you mentioned (bad choice)
Create separate projects for each environment
What I mean by separate projects is: you create a separate project for each environment, then add a reference to your main project (where all of your objects exist) and set the reference type to "same database". In that project you then add all the needed users/permissions/modifications. These projects will have their own publish profiles as well. One issue you might face is that each of these projects has to reference ALL the databases/dacpacs that your main project references.
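As an illustration of what such a per-environment project might contain (project, domain, and account names here are hypothetical), the security project holds only principals and role memberships, while all schema objects stay in the referenced main project:

    -- Contents of, e.g., a MyDb.Security.Prod project that references the
    -- main project with a "same database" reference (names illustrative):
    CREATE USER [CONTOSO\Account_B] FOR LOGIN [CONTOSO\Account_B];
    GO
    ALTER ROLE [db_datareader] ADD MEMBER [CONTOSO\Account_B];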
Related
I am trying to understand the best way to migrate code when working with Snowflake. There are two scenarios: one where we have only one Snowflake account that houses all environments (dev, test, prod), and another with two accounts (non-prod, prod). With the second option, I was planning to create a separate script that sets up the correct database name based on the environment (dev_ent_dw, prod_ent_dw), and then I will refer to these as variables when creating objects. Example -
set env = 'dev';
set db = $env || '_ent_dw.';
Right now we're running everything manually, so the devops team will run these upfront before running the DDL scripts. We may do something similar with the former scenario, but I am wondering if folks can share best practices for dealing with this, as I am sure it is a common topic at large enterprises.
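(For illustration, here is one way those session variables might actually be consumed - Snowflake's IDENTIFIER() accepts a session variable holding a qualified name; the table and schema below are invented for the sketch.)

    -- Sketch: consuming the variables via IDENTIFIER(); names invented.
    set env = 'dev';
    set tbl = $env || '_ent_dw.sales.orders';
    create table identifier($tbl) (order_id number, created_at timestamp_ntz);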
We have different accounts for each environment (dev, qa, prod). We use Azure DevOps for change management within our team, including the Git repos and Azure Pipelines for deploying scripts via schemachange.
We do NOT append an environment to objects, as that is handled by the account.
Developers write migration scripts and check them into source control. We then create Version folders and move/rename the migration scripts for deployment, and run a pipeline to execute the changes. The only thing we need to change is the URL to deploy against, and that is handled within the pipeline itself. The nice thing here is that we do not need to tweak anything when moving between different branches in source control.
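For reference, schemachange picks up versioned scripts by file name, so a deployed change might look roughly like this (the V{version}__{description}.sql pattern is schemachange's convention; the table itself is invented for illustration):

    -- File: V1.2.0__add_orders_table.sql (version prefix + double underscore)
    create table if not exists ent_dw.sales.orders (
        order_id   number,
        created_at timestamp_ntz
    );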
We have not been doing much with creating clones for development work, as we usually only have one developer working on changes to a set of objects at a time. We are exploring ways to improve our process, but what we have works fairly well for our current needs.
In our current dev workflow there is a main database, DbMain. A process takes the latest version of the project, automatically deploys it there, and then triggers unit tests. As we would like to always have a working version of the project in source control, each developer should be sure that they check in working code and that all tests pass.
For this purpose we decided to create individual databases for each developer, with the following naming convention: DbMain_XX (where XX are the developer's initials). So every developer, before check-in, is supposed to publish all the changes to that database manually and run the unit tests. It is useful to set up a publish config for this purpose that is a copy of the main publish config, with the only difference being the database name.
That would mean we'd have a lot of different publish profiles in the solution, which is quite a mess.
If we don't add these profiles to source control, the .sqlproj file will still have references to them, so the project will reference non-existent files.
So the actual question: can I have a single publish profile for all developers where the database name is changed using variables? For example, DbName_$(dev_initials)? Or can each developer have their own publish configs only locally, without breaking the project?
UPDATE:
Regarding Peter Schott's comments:
I can create a local publish profile, but if I don't add it to source control, there will still be an entry in the .sqlproj file while the file itself is unavailable.
Running tests locally has at least two disadvantages. The first is that everybody would have to install SQL Server locally; we mainly work via virtual machines, and disk space is quite limited there. The other is that developers will inevitably forget, or simply not run the tests manually every time. Sometimes they will push changes to the repo without building it and/or running tests. We would like to avoid such situations and catch a failed build as soon as possible.
Another approach that was mentioned is to have one common build database, and in my case we have one (DbMain). All developers can use it for their needs, but we will definitely hit the situation where two developers publish at the same time, which can cause a lot of confusion when figuring out what really went wrong.
A common approach to this kind of thing - not only for SSDT publish profiles but for config files in general - is to commit a generic version of the file with a name like DbMain.publish.xml.template, provide instructions for developers to rename their copy to DbMain.publish.xml (or whatever), and .gitignore that local copy. This lets developers make whatever changes they want while inheriting the common settings from the .template version of the file.
Publish profiles don't need to be added to the .sqlproj to be used at deploy time; that entry is merely a convenience in Visual Studio to make them easier to find and edit, so you don't need to worry about broken references.
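If you do want a single shared profile, one option (a sketch only - /Profile and /TargetDatabaseName are real SqlPackage parameters, but the file names and initials are invented) is to keep the database name out of the profile and supply it at publish time:

    -- From a developer command prompt (command-line parameters override the
    -- profile), e.g. for a developer with initials PS:
    --   SqlPackage /Action:Publish /SourceFile:DbMain.dacpac
    --              /Profile:DbMain.publish.xml /TargetDatabaseName:DbMain_PS
    -- Inside pre/post-deploy scripts the effective target is still visible:
    SELECT DB_NAME() AS current_target_db;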
You are right in wanting to avoid multiple developers publishing to a common "build" database; this is a recipe for frustration.
Really, you want the "build" database to be published to as part of your CI process, meaning after the developers have pushed their changes.
I'm doing a small pilot project trying to implement Sql Server Data Tools sqlproj projects in order to better bring our databases under source control. In my organization, we have separate no-trust domains for test environments of various purposes, so these domains of course have their own isolated active directory accounts.
The documentation is still somewhat sparse and I don't really know where to go for more information on this toolset, especially considering the extraordinary amount of churn in Visual Studio's history of database assets.
So far, the only idea I've really had would be to make separate sqlproj projects specifically for the security objects of each separate domain, separate from the other schema objects. My hope is that I can somehow tie my actual database schema to those at deploy time and also somehow switch which security project I'm using in the build. I have no idea if that's feasible, though.
Has anyone that uses Visual Studio sqlproj projects had to deal with this? Is there a best practice for this kind of thing?
If you have different settings for each environment, then the easiest approach is either to leave them out of the project (and not delete them when you deploy) or to have a post-deploy script that sets them up manually.
Normally, for handling different configurations, I would suggest using SQLCMD variables (on the properties of the project there is a page for setting these up), but when you create a login you cannot use a variable to create it, so that falls over!
There is an example on how to setup a post deploy wrapper for just this case:
http://schottsql.blogspot.co.uk/2013/05/ssdt-setting-different-permissions-per.html
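The shape of that wrapper is roughly the following (a minimal sketch with hypothetical variable and account names; SQLCMD variables are substituted as text in post-deploy scripts, which is what makes this work where the declarative CREATE LOGIN cannot):

    -- Post-deployment script; $(EnvDomain) comes from the publish profile
    -- or the command line (hypothetical variable).
    IF NOT EXISTS (SELECT 1 FROM sys.server_principals
                   WHERE name = N'$(EnvDomain)\AppUsers')
        CREATE LOGIN [$(EnvDomain)\AppUsers] FROM WINDOWS;
    GO
    IF NOT EXISTS (SELECT 1 FROM sys.database_principals
                   WHERE name = N'$(EnvDomain)\AppUsers')
        CREATE USER [$(EnvDomain)\AppUsers] FOR LOGIN [$(EnvDomain)\AppUsers];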
Good luck with SSDT; there are some strange quirks, but it enables so much!
Twist to the standard "SQL database change workflow best practices"
Background
ASP.NET/C# Web App
MS SQL
Environments
Production
UAT
Test
Dev
We create patch scripts (XML and SQL) that are source controlled in Mercurial. We have a command-line utility that installs patches to the DB (utility.exe install -patch) from a Release folder that the build packages. Patches have metadata that indicates when a patch should run, and we log installed patches in a table in the target DB. All of this was covered in this 3-year-old question:
SQL Server database change workflow best practices
Our Problem/Twist
I think this works well for tables, views, functions and stored procedures. We struggle with application configuration data. Here are some touch points on application configurations.
New client. The BA performs a system study and fit analysis. Out of this comes a configuration Word document describing what application configurations need to be set up. Note some of these may also come in phases over time. We need to get these new configurations into the system for the developer and client UAT.
Developer works on feature request or bug fix. A new configuration change comes out of that change. The configuration needs to make it into the system for testing and promotion to UAT and up.
QA finds that the developer missed an associated configuration change. That configuration needs to make it into the system for promotion to UAT and up.
Build goes to UAT. The client performs acceptance testing but finds they really want to change another unassociated configuration and have it promoted with the changes. In other words, they found they want to change a business process via a configuration. The configuration needs to make it into the system for promotion to PRD.
As the client operates in PRD they may tweak application settings. These configurations need to make it into the system for future development and testing.
The general issue is making sure we account for all the configurations and don't accidentally miss any during promotions, which causes grief.
Our Attempts At A Process
a. We had a member of the QA team write patches (XML and SQL) and check those in. This requires a build to make sure those get into the package. This approach really just took care of item 1 above, and we fell apart on the other items. The nice thing is that, for the items that made it into the patches, it was just an install with the utility.
b. A developer threw together a Config page in the application. All the configurations could be uploaded and downloaded via an XML document, but it requires the app to be running. For item 1, a member of the QA team would manually set up configurations in the application and then download the Config.xml file. This XML file would be used to upload configurations in other environments. We would use a text diff tool to look at differences between config.xml files from different environments. This addressed item 1 and the other items, but had problems: not all configurations made it into the XML document (which just needs to be fixed by a developer); some of the configurations didn't have a UI in the application, so you still had to go to the database manually for some; comparing the XML documents with a text diff was difficult at times (mostly due to sorting, but I'm sure there are other issues); the XML was not very human-readable; and finally, the XML document did not allow for deleting existing incorrect or outdated configs.
c. Recently we went with option b, but over time, for a new client, we just started manually tracking configs and promoting them by hand (UI and DB) through the promotions. Needless to say, lots of human errors.
So we have been looking at solutions. Eventually it would be great to get as much automation in as possible. I'm looking at going with the scripting approach and just focusing on process and documentation, and at using Redgate data compare in addition to what we had been doing with comparing config.xml. With Redgate we have to create views, though, and there is no way to create update scripts from that approach except to update the scripts manually; it does at least allow a comparison without the app running. I'm also looking at pulling the configs out of our normal patches and making them a system independent of the build (utility.exe -patch -config). When I say focus on process, I mean things like: if we compare and find a config change, whether reported by the client or not, we still script it; it just means we have to have a process in place to quickly revalidate the config install before promoting to the next level. As for documentation, I'm looking at making the original QA document a living document instead of just an upfront document. The goal is to try to enhance clarity and reduce missing configurations during promotion. Unfortunately it doesn't improve speed of delivery.
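(To illustrate the views idea, since a data-compare tool needs a stable shape to diff: something like the sketch below, with invented table and column names, can normalize config data and filter out values that legitimately differ per environment.)

    -- Hypothetical normalizing view for config comparison across environments:
    CREATE VIEW dbo.vw_ConfigCompare AS
    SELECT ConfigKey, ConfigValue
    FROM dbo.AppConfig
    WHERE ConfigKey NOT IN ('EnvironmentName', 'LastRunDate'); -- env-specific noise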
Does anyone have any recommendations or best practices to pass along? Thanks.
Can I ask exactly what you mean by application configuration? I'm interpreting that as both:
Config files in the web application
Static reference data inside the database
Full disclosure I work for Red Gate. You might be interested in taking a look at Deployment Manager, it's a deployment tool that deploys applications, databases and configuration. It's free for up to 5 projects and target servers.
The approach it uses is to package application code and the database state into packages. These packages can be deployed into dev, test, staging and production environments. The same package is deployed to each environment.
Any application configuration that needs to change between environments is handled in one of the ways below:
Variable substitution in web.config. The tool allows you to specify override values for variables in these files, and to set these per environment/server.
Substituting the web.config file per environment.
Custom powershell scripts that are run pre/post deploy. You could use these to execute custom SQL based on the environment or server.
Static data within the database, using SQL Source Control's static data feature. I've written a blog post about how to supply different sets of static data to different environments/customers; a sketch of the general pattern follows below.
This allows you to source control the application configurations and deploy them to different environments.
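As a rough illustration of the static-data pattern (this is a generic post-deploy MERGE sketch, not SQL Source Control's own mechanism; the table, keys, and the $(SmtpServer) placeholder - standing in for whatever per-environment substitution your tool provides - are all invented):

    -- Reference data reconciled at deploy time; per-environment values come
    -- from substituted variables, shared values are hard-coded.
    MERGE INTO dbo.AppConfig AS t
    USING (VALUES
        ('SmtpServer', '$(SmtpServer)'),  -- differs per environment
        ('PageSize',   '50')              -- same everywhere
    ) AS s (ConfigKey, ConfigValue)
    ON t.ConfigKey = s.ConfigKey
    WHEN MATCHED THEN UPDATE SET t.ConfigValue = s.ConfigValue
    WHEN NOT MATCHED THEN INSERT (ConfigKey, ConfigValue)
        VALUES (s.ConfigKey, s.ConfigValue);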
I'm trying to set up my BIRT reports, and the iServer they sit on, such that the database the Data Sources connect to is determined by the environment. Our setup is that currently there is just one iServer instance and many environments running a Tomcat webapp that hit it (this may be the problem...).
Essentially the ideal is that the report connects differently in these places:
Local development, which is running a local Tomcat instance of the application that talks to the iPortal/iServer. Local database, but it should be easy to switch to other databases for debugging etc.
QA deploy, qa database
Production deploy, production database
I've seen two options for how to fix this:
The first option is to bind the Data Source to a configuration file in resources somewhere. The problem here is that if you have only one iServer, its resources are local to the server it is on, not to where the webapp runs. So, if I understand it correctly, this does not provide the flexibility I'm looking for.
The second option is to pass in all the connection info as report parameters and have the application determine the correct parameters to send in. This way the application could pull from a local configuration file. This option would work, but I'm wary of the security (or lack thereof) in passing around connection info/credentials.
Does anyone have a better option? Or have people just run local iServer instances for development? I can see that running an iServer for each environment may simplify this issue and allow reports released to production to be updated and tested in a QA environment without disrupting production, so maybe that is the solution.
One possible approach would be to set each of the connection properties conditionally in the Property Binding section of the Edit Data Source dialog, based on the value of a hidden parameter indicating which environment is to be accessed.
An example of this approach can be found here.
You mention that you are looking for an option for development, including the possibility of a local iServer. I think this would be overkill. Do your dev & initial testing in BIRT; you do not need an iServer to run the report. If you need resources on the iServer to run & test the report, you can reference those through the Server Explorer in BIRT Pro. Once you are ready to deploy, I would follow Mark's strategy above, using property bindings on the data source itself. That is as close to a best practice as exists in BIRT for this migration requirement.