Multiple-domain security with SSDT .sqlproj projects?

I'm doing a small pilot project trying to implement SQL Server Data Tools .sqlproj projects in order to bring our databases under source control. In my organization, we have separate no-trust domains for test environments of various purposes, so these domains of course have their own isolated Active Directory accounts.
The documentation is still somewhat sparse and I don't really know where to go for more information on this toolset, especially considering the extraordinary amount of churn in Visual Studio's history of database assets.
So far, the only idea I've really had is to make separate .sqlproj projects specifically for the security objects of each domain, separate from the other schema objects. My hope is that I can tie my actual database schema to those at deploy time and switch which security project is used in the build. I have no idea if that's feasible, though.
Has anyone that uses Visual Studio sqlproj projects had to deal with this? Is there a best practice for this kind of thing?

If you have different settings for each environment, the easiest approach is either to leave those objects out of the project (and configure the deployment not to drop them) or to have a post-deploy script that sets them up.
Normally, for handling different configurations I would suggest SQLCMD variables (there is a page for setting these up in the project properties), but you cannot use a variable for the login name when you create a login, so that approach falls over!
There is an example of how to set up a post-deploy wrapper for just this case:
http://schottsql.blogspot.co.uk/2013/05/ssdt-setting-different-permissions-per.html
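To make the idea concrete, here is a minimal sketch of that kind of post-deploy branching, assuming a SQLCMD variable named DeployEnv defined in the project/publish profile and placeholder accounts (DOMAINA\AppService, DOMAINB\AppService) standing in for your real per-domain principals:

-- Post-deployment sketch: $(DeployEnv) and the account names below are placeholders.
IF '$(DeployEnv)' = 'DomainA'
   AND NOT EXISTS (SELECT 1 FROM sys.database_principals WHERE name = N'DOMAINA\AppService')
BEGIN
    CREATE USER [DOMAINA\AppService] FOR LOGIN [DOMAINA\AppService];
    ALTER ROLE db_datareader ADD MEMBER [DOMAINA\AppService];
END
ELSE IF '$(DeployEnv)' = 'DomainB'
   AND NOT EXISTS (SELECT 1 FROM sys.database_principals WHERE name = N'DOMAINB\AppService')
BEGIN
    CREATE USER [DOMAINB\AppService] FOR LOGIN [DOMAINB\AppService];
    ALTER ROLE db_datareader ADD MEMBER [DOMAINB\AppService];
END

The logins themselves are assumed to already exist at the server level in each domain; the post-deploy script only wires up the database users and role membership per environment.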
Good luck with SSDT; there are some strange quirks, but it enables so much!

Related

How to handle code migration when working with Snowflake?

I am trying to understand the best way to migrate code when working with Snowflake. There are two scenarios: one where we have only one Snowflake account that houses all environments (dev, test, prod), and the other where we have two accounts (non-prod, prod). With the second option, I was planning to create a separate script that sets up the correct database name based on the environment (dev_ent_dw, prod_ent_dw) and then refer to these as variables when creating objects. Example:
set env = 'dev';
set db = $env || '_ent_dw.';
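A rough sketch of how the variable might then be referenced when creating objects, assuming Snowflake's IDENTIFIER() function and an illustrative staging.customer object path (the target schema is assumed to exist):

set tbl = $db || 'staging.customer';  -- $db was set above and already ends with '_ent_dw.'
create table if not exists identifier($tbl) (
    id   number,
    name string
);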
Right now we're running everything manually, so the DevOps team will run these up front before running the DDL scripts. We may do something similar with the former scenario, but I am wondering if folks can share best practices for dealing with this, as I am sure it is a common topic at large enterprises.
We have different accounts for each environment (dev, qa, prod). We use Azure DevOps for change management within our team, including the Git repos and Azure Pipelines for deploying scripts via schemachange.
We do NOT append an environment to objects, as that is handled by the account.
Developers write migration scripts and check them into source control. We then create version folders, move/rename the migration scripts for deployment, and run a pipeline to execute the changes. The only thing we need to change is the URL to deploy against, and that is handled within the pipeline itself. The nice thing here is that we do not need to tweak anything when deploying from different branches in source control.
We have not been doing much with creating clones for development work, as we usually only have one developer working on changes to a set of objects at a time. We are exploring ways to improve our process, but what we have works fairly well for our current needs.
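For anyone unfamiliar with schemachange, a minimal sketch of one of those versioned migration scripts, using its Flyway-style V<version>__<description>.sql naming; the object names here are purely illustrative:

-- File: migrations/V1.2.0__add_customer_dim.sql (version and names are examples only)
create table if not exists ent_dw.core.customer_dim (
    customer_id   number,
    customer_name string,
    loaded_at     timestamp_ntz default current_timestamp()
);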

Twist to the standard “SQL database change workflow best practices”

Background
ASP.NET/C# Web App
MS SQL
Environments
Production
UAT
Test
Dev
We create patch scripts (XML and SQL) that are source controlled in Mercurial. We have a command-line utility that installs patches to the DB (utility.exe install --patch) from a Release folder that the build packages up. Patches have metadata that determines when a patch should run, and we log installed patches in a table in the target DB. All of this was covered in the 3-year-old question:
SQL Server database change workflow best practices
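For context, a minimal sketch of the kind of patch-tracking table described above; the table and column names are my own illustration, not the actual schema:

CREATE TABLE dbo.PatchHistory (
    PatchId   INT IDENTITY(1, 1) PRIMARY KEY,
    PatchName NVARCHAR(200) NOT NULL,
    AppliedOn DATETIME2     NOT NULL DEFAULT SYSUTCDATETIME(),
    AppliedBy SYSNAME       NOT NULL DEFAULT SUSER_SNAME()
);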
Our Problem/Twist
I think this works well for tables, views, functions and stored procedures. We struggle with application configuration data. Here are some touch points on application configurations.
New client. A BA performs a system study and fit analysis. Out of this comes a configuration Word document describing what application configurations need to be set up. Note that some of these may also come in phases over time. We need to get these new configurations into the system for the developer and for client UAT.
Developer works on feature request or bug fix. A new configuration change comes out of that change. The configuration needs to make it into the system for testing and promotion to UAT and up.
QA finds that the developer missed an associated configuration change. That configuration needs to make it into the system for promotion to UAT and up.
Build goes to UAT. The client performs acceptance testing but finds they really want to change another, unassociated configuration and have it promoted with the changes. In other words, they found they want to change a business process via a configuration. The configuration needs to make it into the system for promotion to PRD.
As the client operates in PRD they may tweak application settings. These configurations need to make it into the system for future development and testing.
The general issue is making sure we account for all the configurations and don't accidentally miss any during promotions, which causes grief.
Our Attempts At A Process
a. We have had a member of the QA team write patches (XML and SQL) and check those in. This requires a build to make sure those get into the package. This approach really only took care of item 1 above, and we fell apart on the other items. The nice thing is that, for the items that made it into patches, it was just an install with the utility.
b. A developer threw together a Config page in the application. All the configurations could be uploaded and downloaded via an XML document, but it requires the app to be running. For item 1, a member of the QA team would manually set up configurations in the application and then download the Config.xml file. This XML file would be used to upload configurations in other environments. We would use a text diff tool to look at differences between config.xml files from different environments. This addressed item 1 and the other items, but had problems: not all configurations made it into the XML document (this just needs to be fixed by a developer); some of the configurations didn't have a UI in the application, so for those you still had to go to the database manually; comparing the XML documents with a text diff was difficult at times (mostly due to sorting, but I'm sure there are other issues); the XML was not very human-readable; and finally, the XML document did not allow for deleting existing incorrect or outdated configs.
c. Recently we went with option b, but over time for a new client we just started tracking configs and promoting them by hand (UI and DB) through the promotions. Needless to say, lots of human errors.
So we have been looking at solutions. Eventually it would be great to get as much automation in as possible. I'm looking at going with the scripting approach and focusing on process and documentation, and at using Redgate data compare in addition to what we had been doing with comparisons of config.xml. With Redgate we have to create views, though, and there is no way to create update scripts from that approach except to write the scripts manually; it does at least allow a comparison without the app running. I'm also looking at pulling the configs out of our normal patches and making them a system independent of the build (utility.exe --patch --config). When I say focus on process, I mean things like: if we compare and find a config change, whether reported by the client or not, we still script it; it just means we have to have a process in place to quickly revalidate the config install before promoting to the next level. As for documentation, I'm looking at making the original QA document a living document instead of just an upfront one. The goal is to improve clarity and reduce missed configurations during promotion. Unfortunately it doesn't improve speed of delivery.
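As one way to make "we still script it" concrete, a configuration change could be captured as an idempotent MERGE so the same patch can be replayed in every environment; the table and key names below are assumptions for illustration only:

MERGE dbo.AppConfiguration AS t
USING (VALUES
    (N'Invoice.AutoApprove',   N'true'),
    (N'Invoice.ApprovalLimit', N'5000')
) AS s (ConfigKey, ConfigValue)
    ON t.ConfigKey = s.ConfigKey
WHEN MATCHED AND t.ConfigValue <> s.ConfigValue THEN
    UPDATE SET t.ConfigValue = s.ConfigValue
WHEN NOT MATCHED THEN
    INSERT (ConfigKey, ConfigValue) VALUES (s.ConfigKey, s.ConfigValue);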
Does anyone have any recommendations or best practices to pass along? Thanks.
Can I ask exactly what you mean by application configuration? I'm interpreting that as both:
Config files in the web application
Static reference data inside the database
Full disclosure: I work for Red Gate. You might be interested in taking a look at Deployment Manager; it's a deployment tool that deploys applications, databases, and configuration. It's free for up to 5 projects and target servers.
The approach it uses is to package application code and the database state into packages. These packages can be deployed into dev, test, staging and production environments. The same package is deployed to each environment.
Any application configuration that needs to change between environments is handled in one of the ways below:
Variable substitution in web.config. The tool allows you to specify override values for variables in these files, and to set these per environment/server.
Substituting the web.config file per environment.
Custom powershell scripts that are run pre/post deploy. You could use these to execute custom SQL based on the environment or server.
Static data within the database, using SQL Source Control's static data feature. I've written a blog post about how to supply different sets of static data to different environments/customers.
This allows you to source control the application configurations and deploy them to different environments.

Auto-deploy Zend Framework 2 application + Database schema + Actual data

Background:
I am using GitHub to store a ZF2 application.
The database schema and the actual data stored inside it are not under version control. At the moment I am in development mode, so I have some database dump scripts that I load into the database when I need to. I also tweak entries in the database via phpMyAdmin when I need ongoing granular control for immediate testing purposes. I am also looking into using Doctrine ORM, so my schema will be part of my code via annotations, and that will be checked into GitHub. Doctrine ORM will generate the actual schema for me, although it is still a separate step in the deployment process. The actual data, however, will still be outside of the application and outside of the repository; it currently has to be dealt with separately and is not automated.
Goal:
I want to be able to deploy ZF2 application and the database schema, and the data onto Zend Server and have it "just work" in the most automated, least manual way possible.
Question:
What is a recommended, best practice way to deploy every aspect of ZF2 application in the most automated, least manual way possible and have it "just work"? Let's focus on the Development and Testing mode here, as in Production it may be good to have separate deployment steps to protect against accidental live data overwrites.
You can try Phing (http://www.phing.info/) for deploying your PHP application, adjusting directory permissions, running database migrations, running unit tests, etc. I used Phing in a couple of my projects with great success.

Committing Stored Procedures to SVN Repository

My current development environment for C# projects is Visual Studio, with a SQL Server database, and VisualSVN to connect to my SVN repository. To manage revisions of my stored procedures, views, etc., I save the ALTER scripts to a folder watched by my SVN client so they get included in the repository.
I have checked out some (now older) posts like these (How to keep Stored Procedures and other scripts in SVN/Other repository? and Is there a SVN plugin for SQL Server Management Studio 2005 or 2008?) and have seen recommendations for these tools: http://www.red-gate.com/products/sql-development/sql-source-control/ and http://www.zeusedit.com/agent/ssms/ms_ssms.html .
As I infrequently work with projects doing much DB-side programming, this has never been a major bother (a dozen scripts in a folder with some naming scheme is not much to manage manually), but I have just inherited a project with a few hundred views and 1000+ Stored Procedures which have never been included in version control.
My question is:
What process do others follow for managing the versioning of their SQL Server code? Is there an accepted, clever, or otherwise obvious approach I am missing here? I am currently leaning towards the purchase of one of the aforementioned tools, but am looking for advice from the community before I do this.
I realize this may result in a tool recommendation rather than a code solution but posted to SO as I think this is the appropriate crowd to ask this of.
I would recommend you go with something like the Redgate tool, and treat any SQL database in the same way you'd treat your C# source code; manually keeping track of the ALTER statements will trip you up sooner or later as the number of modifications grows. I can't speak for the Zeus Edit tool, but having used the Redgate one, it "just works". Another benefit of using a tool like this is that it can manage your migration scripts, so you can make a bunch of changes on your development version, then generate a single update script to update your testing database, etc., including data changes, which is IMHO the biggest PITA to manage manually.
The other thing to consider: even if the changes are infrequent and you get away with manually tracking the ALTER statements, what if someone else ends up working on the same project? Now you have another potential source of mismanaged change scripts...
Anyway, do let us know how you get on and best of luck with it!
I've been maintaining a database with around 800 objects in it. We've always just scripted the database objects to an SVN-watched folder as you describe. We have had some issues with this method, mostly with people forgetting to script new or modified objects. At the end of the day it hasn't been a huge problem for our project, but yours may be different.
We’ve looked into a couple tools, but they always assume you are starting from scratch, and we have almost 10 years of history we’d like to preserve. In the end we just end up settling back into our text-based manual solution. It's cheap and easy.
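If it helps anyone doing the same, here is a sketch of one way to pull the current module definitions out of SQL Server for dumping into that kind of folder; the object-type filter is illustrative:

SELECT
    s.name AS schema_name,
    o.name AS object_name,
    o.type_desc,
    m.definition
FROM sys.sql_modules AS m
JOIN sys.objects AS o ON o.object_id = m.object_id
JOIN sys.schemas AS s ON s.schema_id = o.schema_id
WHERE o.type IN ('P', 'V', 'FN', 'IF', 'TF')  -- procedures, views, scalar/inline/table functions
ORDER BY s.name, o.name;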
Another option you might want to look into is setting up a Visual Studio Database Project. It will script all your objects and provide some deployment options as well. My opinion was that it tried to be a little too tightly integrated for our tastes; we have a few named references to linked databases that it just wouldn't give up on.

Incremental development with subsonic

I'm in the process of starting up a web site project. My plan is to roll out the site in a somewhat rudimentary form first and then add to the site functionality along the way.
I'm using SubSonic 3 for my DAL, and I expect the database to go through multiple versions as the site evolves. This means I'll need some kind of versioning and migration tooling. I understand that SubSonic has built-in migration capabilities, but I'm having difficulty grasping how to use these tools in my scenario.
First there's the SimpleRepository model, where SubSonic "automagically" handles the migrations as I develop my site. I can see how this works on my dev machine, but I'm not sure how to handle deployments with this.
Would Subsonic run the necessary migrations on my live site as the appropriate methods are called?
Is there some way I can force all necessary migrations on a site while taking the site offline, when using the SimpleRepository model? (Otherwise I would expect random users to experience severe performance hits as the migration routines kick in.)
Would I be better off using the ActiveRecord model, and then handling migrations with the Subsonic.Schema.Migrator? (I suspect so)
Do you know of any good resources explaining how to handle this situation with the migrator? (I read the doc, but I can't piece together how I would use this in practice)
Thanks for listening/replying.
Regards
Jesper Hauge
I would advise against ever running migrations against a live site. SubSonic's migrations are really there to make development simpler and should never be used against a live environment. To be honest even using SubSonic.Schema.Migrator you're still going to bump into the fact that refactoring databases is an incredibly hard problem. For example renaming a column in a table using management studio is trivial, but what happens in the background involves creating an entirely new table and migrating all the constraints, data etc. before renaming the new table.
The most effective way I've found for dealing with this is:
Script all database changes as you make them in your development environment (SQL Server Management Studio will do this for you) and add these scripts to your source control.
As part of deployment (obviously backup first) run the migration scripts and then deploy the updated application on success.
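To make that concrete, here is a sketch of the kind of hand-written, re-runnable migration script this approach relies on; the table, column, and change-log names are illustrative assumptions, not anything SubSonic generates:

-- Guarded schema change so the script can be re-run safely.
IF NOT EXISTS (
    SELECT 1
    FROM sys.columns
    WHERE object_id = OBJECT_ID(N'dbo.Article')
      AND name = N'PublishedOn'
)
BEGIN
    ALTER TABLE dbo.Article ADD PublishedOn DATETIME2 NULL;
END;

-- Record that the script ran (assumes a hand-rolled change-log table).
IF NOT EXISTS (SELECT 1 FROM dbo.SchemaChangeLog WHERE ScriptName = N'0005_add_article_publishedon.sql')
    INSERT INTO dbo.SchemaChangeLog (ScriptName, AppliedOn)
    VALUES (N'0005_add_article_publishedon.sql', SYSUTCDATETIME());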
Whether you use ActiveRecord or SimpleRepository is then down to whether you want the extra features/complexity of ActiveRecord.
Hope this helps
I would use ActiveRecord; it's easy to use, and for any changes you just re-run the TT files, then build or publish your solution and you're done. SVN will keep multiple versions of your builds, so if you make a mess of things you just drop back a revision.
