SSIS 2012 Workflow Best Practices

It is not clear to me how I should use the new features of SSIS in SQL Server 2012/2014 in an enterprise environment. Specifically, I am referring to the project deployment model, project parameters, environments, etc. We use a three-tier environment workflow: development happens in the development environment, testing and staging in QA, and production runs in production. The developers only have access to the development environment; the DBAs migrate code to the other environments. All source is kept in TFS.
What is the intended workflow using these new features? If a developer develops the project/package, does the developer deploy the project to the SSISDB or does the developer stop after checking in the source? Where does the DBA come into the picture? Which environment contains SSISDB? How does the project/package get deployed to the other environments?
There seem to be many "how-tos" published on the Internet, but I am struggling to find one that deals with business-workflow best practices. Can anyone suggest a link to an article on this subject?
Thanks.

What is the intended workflow using these new features?
It is up to the enterprise to determine how they will use them.
If a developer develops the project/package, does the developer deploy the project to the SSISDB or does the developer stop after checking in the source?
Where does the DBA come into the picture? Which environment contains SSISDB? How does the project/package get deployed to the other environments?
It really does depend. I advocate that developers have sysadmin rights in the development tier of servers. If they break it, they fix it (or if they've really pooched it, we re-image the server). In that scenario, they develop the implementation process and use deployments to Development to simulate the actions the DBAs will take when deploying to all the other pre-production and production environments. This generally satisfies your favorite regulatory standard (SOX/SAS 70/HIPAA/PCI/etc.), as those creating the work are not the same ones who install it.
What is the deliverable unit of work for SSIS packages using the project deployment model? It is an .ispac file: a self-contained zip file with a manifest, project-level parameters, project-level connection managers, and the SSIS packages.
How you generate that is up to you. Maybe you check the .ispac in and that is what is deployed to your environments. Maybe the DBAs open the solution from source control and build their own .ispac. Maybe you have continuous integration (CI) running, and you click a button and some automated process generates and deploys the .ispac.
That's 1/3 of the equation. From the SSISDB side, you likely want to create an Environment and populate it with variable values: things like connection strings, file paths, and user names and passwords. When you start creating those things, CLICK THE CREATE SCRIPT TO NEW WINDOW button! Otherwise, you're going to have to re-enter all that data when you lift to a new environment. I would expect your developers to check those scripts into source control. For passwords, blank out the value and make a note in your deployment checklist that it needs to be fixed before mashing F5.
You also need SQL Scripts to create the structure (folder) within the SSISDB for the project to be deployed into. Once deployed, you'll want to apply the Environment values, created in the preceding step, to the newly deployed project. Save those out as well.
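To make that concrete, here is a minimal sketch of the kind of deployment script a DBA might run per tier, using the built-in SSISDB catalog procedures. The folder, project, environment, and variable names, the connection string, and the .ispac path are all hypothetical:

    DECLARE @folder_id bigint, @reference_id bigint;

    -- Create the folder the project will be deployed into
    EXEC SSISDB.catalog.create_folder
         @folder_name = N'Finance', @folder_id = @folder_id OUTPUT;

    -- Create the environment and its variables (re-enter sensitive values per tier)
    EXEC SSISDB.catalog.create_environment
         @folder_name = N'Finance', @environment_name = N'QA';
    EXEC SSISDB.catalog.create_environment_variable
         @folder_name = N'Finance', @environment_name = N'QA',
         @variable_name = N'SalesConnectionString', @data_type = N'String',
         @sensitive = 0,
         @value = N'Data Source=QASQL01;Initial Catalog=Sales;Integrated Security=SSPI;',
         @description = N'Connection to the Sales database';

    -- Deploy the .ispac built from source control
    DECLARE @project_stream varbinary(max) =
        (SELECT BulkColumn
         FROM OPENROWSET(BULK N'C:\Deploy\Finance.ispac', SINGLE_BLOB) AS b);
    EXEC SSISDB.catalog.deploy_project
         @folder_name = N'Finance', @project_name = N'Finance',
         @project_stream = @project_stream;

    -- Tie the deployed project to the environment
    EXEC SSISDB.catalog.create_environment_reference
         @folder_name = N'Finance', @project_name = N'Finance',
         @environment_name = N'QA', @reference_type = N'R',
         @reference_id = @reference_id OUTPUT;

From there, catalog.set_object_parameter_value (with @value_type = N'R') binds each project parameter to an environment variable, which is the "apply the Environment values" step above. Running the same script against each tier's SSISDB, with tier-specific values substituted, keeps the deployment repeatable.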
I would have each environment contain an SSISDB. I don't want a missed configuration allowing a process in the production tier to reach across to the development tier and pull data. I've seen that; it's not pretty. When code is deployed to the QA/Stage tier, we find out quickly whether we missed a connection string somewhere, because the dev servers reject the connection from QA. This means our SQL instances don't all run under the same service account; each tier gets its own: domain\SQLServer_DEV, domain\SQLServer_QA, domain\SQLServer_PROD. Do what you can to prevent yourself from having a bad day. If you go with a single/shared SSISDB across all your tiers, it can work, but you're going to have to invest a lot more energy ensuring that packages always run with the correct configuration environment applied, lest bad things happen.

Related

Deploy SQL Server database on multiple servers using DevOps (a Mac as development laptop)

Hi,
We are currently working on a .NET Core project that will use multiple databases with the same structure.
In short, this is a multi-tenant project and each tenant will use the same web application (multiple instances behind a load balancer), BUT each tenant will have its own database.
We are searching for the best solution to ease our deployment process.
When the application (and DB) is updated, we need to update the database structure on all SQL Servers and all databases (one SQL Server instance can contain several tenant databases).
FYI, application and SQL server are hosted on AWS, our CI/CD is Azure DevOps.
And last (but not least) limitation: we are working in VS Code only (macOS and Linux laptops).
So, we looked at some solutions:
Using database projects (.sqlproj) + DACPAC generation deployed using DevOps, but that is not available in VS Code
Using migrations: they do not work with multiple databases and dynamic connection strings
Using SQL scripts: too complicated to maintain by hand a script that takes care of every possible case
So could someone give us some advice to solve this problem?
The general solution here is to generate SQL Scripts for each deployment, and integrate those into your CI/CD process.
You could use EF Migrations to generate a SQL script that is then tested, committed to your repo as a first-class asset, and deployed by your CI/CD pipeline. Or you could use SSDT to manage the schema and generate change scripts. But those aren't the only reasonable ways.
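As a sketch of what the first option produces: `dotnet ef migrations script --idempotent` emits a script that guards each migration with a check against the migration-history table, so the very same file can be run against every tenant database from the pipeline, regardless of which version each database is at. The migration ID and table below are made up:

    IF NOT EXISTS (SELECT 1 FROM [__EFMigrationsHistory]
                   WHERE [MigrationId] = N'20240101000000_AddTenantSettings')
    BEGIN
        CREATE TABLE [dbo].[TenantSettings] (
            [Id]    int IDENTITY NOT NULL PRIMARY KEY,
            [Name]  nvarchar(128) NOT NULL,
            [Value] nvarchar(max) NULL
        );
        -- Record the migration so this block never runs twice
        INSERT INTO [__EFMigrationsHistory] ([MigrationId], [ProductVersion])
        VALUES (N'20240101000000_AddTenantSettings', N'8.0.0');
    END;
    GO

The pipeline then just loops over the tenant connection strings and executes the one script against each database.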
If you are modifying the schema by hand without using SSDT, you would normally just use a tool to generate the change script and go from there.
There are many tools (including SSDT) that help you diff a development environment against a target production schema and generate the change scripts, e.g. Redgate ReadyRoll.
Note that if you intend to perform online schema updates you need to review the change scripts manually for offline DDL operations, and to ensure that your code/database changes have the correct forward and backward compatibility to support a rollout while the application is online.
And preparing, reviewing, testing, and editing the database change scripts is not something that every dev on the team needs to do, so you can always consider jumping onto a Windows VM for that task.

Best way to manage SQL Server on developer, test, staging, and production environments through Visual Studio 2013

I've read an article on the MS blog, another here on Stack Overflow, and this article.
They do shed some light on my scenario, but I feel I may be missing something...
The third article above nicely explains a possible way to deploy database versions including schema and data... but is oriented to deploying to production.
I am looking to streamline deploying database projects to developer DB instances, test, staging, and production (all are SQL Server 2012 Standard Edition).
On the developer instances, the schema may be a few versions behind; we have contractors who leave, and it may be a couple of dev cycles before a new contractor tries to deploy.
Also, how do you get the schema on the target to clean itself up? I know we can turn off the restrictions that prevent removing schema objects, but on the developer workstation instances the logins are different than in the other environments, and we do not want those deleted!!! The second article has some clues on this, but its approach did not work when I tried it. We have one application role across all environments, and depending on the environment, the right login is placed in it.
I have a sense I may have to propose changing our schema, which may not fly well with the other leads.
I would appreciate hearing from anyone who has a tried and true process in place that can cover seamless deployment to the 4 environments described above.
Thanks!
You might be interested in Deployment Manager and SQL Source Control from Red Gate (full disclosure - I work for Red Gate).
The approach these two products use for keeping development environments in sync is:
Developers edit a local database to make their changes (or the whole team edits a shared DB)
Developers can then synchronize the database to an existing source control repository (e.g. SVN/Git/TFS) using SQL Source Control
Other team members can update their databases from the repository, and changes are applied to their local database.
Deployment Manager works with a CI server to allow the automated deployment of any version of the database to a set of predefined environments. For example you might want an automated deployment to an integration environment after every commit. Deployments out to test/staging/production environments are then push button deployments when required.
Under the hood, it uses Red Gate's SQL Compare technology to compare the versioned database state to the target database state. This means that any development database can be updated to the latest state, even if it is much older than the head revision or belongs to a new member joining the team.
You can include filters within the packages/repository to exclude certain objects (for example users, roles, keys, or specific schemas). This means that you can deploy the same version/package to each environment without interfering with those objects.
My colleague has just written a great intro blog post with some videos if you're interested in more info.

Database development and deployment methods and tools

My team develops a web application using ASP.NET. The application is heavily database-driven (we use SQL Server); most features require database development in addition to server- and client-side code. We use Git as our source code management system.
In order to check in (and later deploy) the database changes, we create SQL scripts and check them in. Our installer knows to run them, and this is how we deploy these changes. Merging changes in these scripts is very awkward (for example, when two developers add a column to the same table).
So my question is: what other method or tool can you suggest? I know Visual Studio has a database project which may be useful; I still haven't learned about it, and I wonder if there are other options out there before I start learning it.
Thanks!
I think you have to add Liquibase to your workflow and use it from the first steps of database development (check the Liquibase Quick Start, where the changelog starts from creating the initial structures).
From the developers' point of view, adding Liquibase means additional changelog file(s), typically XML, appearing in the source tree; whenever the database schema has to be changed, the change is recorded as a changeset.
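Note the changelog does not have to be XML: Liquibase also accepts SQL-formatted changelogs, which your SQL-script-oriented team may find easier to review and merge. A minimal sketch (the authors, changeset IDs, and table are invented):

    --liquibase formatted sql

    --changeset alice:create-posts-table
    CREATE TABLE dbo.Posts (
        Id    int IDENTITY NOT NULL PRIMARY KEY,
        Title nvarchar(200) NOT NULL
    );
    --rollback DROP TABLE dbo.Posts;

    --changeset bob:add-author-column
    ALTER TABLE dbo.Posts ADD AuthorId int NULL;
    --rollback ALTER TABLE dbo.Posts DROP COLUMN AuthorId;

Liquibase records each applied changeset in its DATABASECHANGELOG table, so running the changelog repeatedly only applies what is new.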
First, full disclosure: I work for Red Gate, who make this product...
You might be interested in taking a look at SQL Source Control. It's a plugin for SSMS that connects your development database to your existing version control system, git in your case.
When a developer makes a change to a dev database (either in a dedicated local copy or in a shared dev database), the change is detected and can then be committed to the repository. Other developers can pick up this change, and you can propagate it up to other environments.
When it comes to deployment you can then use the SQL Compare tool to deploy from a specific revision in your repository that you check out.
It's really handy in cases like your example of two developers changing the same table: either the second developer picks up the change from version control before committing their own, or you use git's branching/merging features to track the changes in separate branches and deploy them as separate changes. There's scope to couple this into CI systems too.
Some specifics on running SQL Source Control with git:
http://datachomp.com/archives/git-and-red-gate-sql-source-control/
And a link to a more general set-up guide
http://www.troyhunt.com/2010/07/rocking-your-sql-source-control-world.html

Managing SQL Server database version control in large teams

For the last few years I was the only developer that handled the databases we created for our web projects. That meant that I got full control of version management. I can't keep up with doing all the database work anymore and I want to bring some other developers into the cycle.
We use TortoiseSVN and store all repositories on a dedicated server in-house. Some clients require us not to have their real data on our office servers, so we only keep scripts that can generate the structure of their database along with scripts to create useful fake data. Other times our clients want us to have their most up-to-date information on our development machines.
So what workflow do larger development teams use to handle version management and sharing of databases? Most developers prefer to deploy the database to an instance of SQL Server on their development machine. Should we:
Keep the scripts for each database in SVN and make developers export new scripts if they make even minor changes
Detach databases after changes have been made and commit MDF file to SVN
Put all development copies on a server on the in-house network and force developers to connect via remote desktop to make modifications
Some other option I haven't thought of
Never have an MDF file in the development source tree. MDFs are a result of deploying an application, not part of the application's sources. Thinking of the database file as development source is a shortcut to hell.
All the development deliverables should be scripts that deploy or upgrade the database. Any change, no matter how small, takes the form of a script. Some recommend using diff tools, but I think they are a rat hole. I champion versioning the database metadata and having scripts to upgrade from version N to version N+1. At deployment, the application can check the currently deployed version and then run all the upgrade scripts that bring it to current. There is no script that deploys the current version directly: a new deployment first deploys v0 of the database and then walks through every version upgrade, including dropping objects that are no longer used. While this may sound a bit extreme, it is exactly how SQL Server itself keeps track of the various changes occurring in the database between releases.
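A minimal sketch of this pattern, with invented table and object names (the v0 script creates the version-tracking table itself; each upgrade script fires only when the database is at exactly the version below it):

    -- v0 script: create the version-tracking table
    CREATE TABLE dbo.SchemaVersionHistory (
        SchemaVersion int          NOT NULL PRIMARY KEY,
        AppliedAt     datetime2(0) NOT NULL
    );
    INSERT INTO dbo.SchemaVersionHistory VALUES (0, SYSUTCDATETIME());

    -- upgrade script v1 -> v2: runs only when the database is exactly at v1
    IF (SELECT MAX(SchemaVersion) FROM dbo.SchemaVersionHistory) = 1
    BEGIN
        ALTER TABLE dbo.Customer ADD Email nvarchar(256) NULL;  -- an example change
        DROP TABLE dbo.LegacyCustomerEmail;                     -- retired objects get dropped here
        INSERT INTO dbo.SchemaVersionHistory VALUES (2, SYSUTCDATETIME());
    END;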
As simple text scripts, all the database upgrade scripts are stored in version control just like any other sources, with tracking of changes, diff-ing and check-in reviews.
For a more detailed discussion and some examples, see Version Control and your Database.
Option (1). Each developer can have their own up-to-date local copy of the DB ("up to date" meaning recreated from the latest version-controlled scripts: base + incremental changes + base data + run data). In order to make this work, you should have the ability to 'one-click' deploy any database locally.
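One way to get that 'one-click' deploy is a single driver script executed in SQLCMD mode, which stitches the versioned scripts together; the database and file names here are made up:

    -- deploy_local.sql (run with: sqlcmd -S localhost -i deploy_local.sql)
    :setvar DatabaseName DevLocalDB
    IF DB_ID(N'$(DatabaseName)') IS NULL
        CREATE DATABASE [$(DatabaseName)];
    GO
    USE [$(DatabaseName)];
    GO
    :r .\01_base_schema.sql
    :r .\02_incremental_changes.sql
    :r .\03_base_data.sql
    :r .\04_run_data.sql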
You really cannot go wrong with a tool like Visual Studio Database Edition. This is a version of VS that manages database schemas and much more, including deployments (updates) to target server(s).
VSDE integrates with TFS so all your database schema is under TFS version control. This becomes the "source of truth" for your schema management.
Typically developers will work against a local development database, and keep its schema up to date by synchronizing it with the schema in the VSDE project. Then, when the developer is satisfied with his/her changes, they are checked into TFS, and a build and then deployment can be done.
VSDE also supports refactoring, schema compares, data compares, test data generation and more. It's a great tool, and we use it to manage our schemas.
In a previous company (which used Agile in monthly iterations), .sql files were checked into version control, and an (optional) part of the full build process was to rebuild the database from production and then apply each .sql file in order.
At the end of the iteration, the .sql instructions were merged into the script that creates the production build of the database, and the script files were moved out. So you're only applying updates from the current iteration, not going back to the beginning of the project.
Have you looked at a product called DB Ghost? I have not personally used it, but it looks comprehensive and may offer an alternative, as per point 4 in your question.

What is the best website/web-app upgrade process?

We have a great process for upgrading our clients' websites as far as updating html/js code and assets is concerned (by using Subversion) that we are very happy with.
However, when it comes to upgrading databases, we are without any formal process.
If we add new tables/fields to our development database, when it comes to rolling it out to the production server we have to remember our changes and replicate them. We cannot simply copy the development database on top of the production database as client data would be lost (e.g. blog posts, account info etc).
We are also now in the process of building a web-app which is going to come across the same issues.
Does anyone have a solution that makes this process easier and less prone to error? How do big web-apps get round the problem?
Thanks.
I think that adding controls to the development process is paramount. At one of my past jobs, we had to script out all database changes. These scripts were then passed to the DBA with instructions on what environment to deploy them in. At the end of the day, you can implement technical solutions, but if the project is properly documented (IF!!!) then when it comes time for deployment, the developers should remember to migrate scripts, along with code files. My $.02
In my opinion, your code should always be able to create your database from scratch, and therefore it should handle upgrades too. It should check a field in the database to see what version the schema is at and handle the upgrades to the latest version.
I had some good luck with: http://anantgarg.com/2009/04/22/bulletproof-subversion-web-workflow/
The author has a database versioning workflow (with PHP script), which is decent.
Some frameworks have tools which deal with the database upgrade. For example, Rails migrations are pretty nice.
If no convenient tool is available for your platform you could try scripting modifications to your development database.
In my company we use this model for some of our largest projects:
Say X is the just-deployed version of our application, no different from the latest development version.
We create a new directory for the scripts, naming it for the next version (X + 1), and add it to the Subversion repository.
When a developer wants to modify the development database, they create a .sql script with a name like "1 - does something.sql" that makes the modifications (scripts must be indestructible, i.e. safe to re-run; see the sketch after these steps), save it, and run it on the development database. They commit the web app code and the SQL scripts together. Each developer does the same and maintains the order of execution of the scripts.
When we need to deploy version X + 1, we copy the X + 1 web app code and the scripts to the production server, back up the database, run the SQL scripts one by one on the production database, and deploy the new web application code.
After that, we open a new (X + 2) SQL script directory and repeat the process...
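What "indestructible" can look like in practice: guard every statement so the script is harmless to run twice. The table and column names here are invented:

    -- "1 - add author column.sql"
    IF COL_LENGTH(N'dbo.Posts', N'AuthorId') IS NULL
        ALTER TABLE dbo.Posts ADD AuthorId int NULL;

    -- "2 - create tags table.sql"
    IF OBJECT_ID(N'dbo.Tags', N'U') IS NULL
        CREATE TABLE dbo.Tags (
            Id   int IDENTITY NOT NULL PRIMARY KEY,
            Name nvarchar(64) NOT NULL
        );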
We basically have a similar approach to Senad's: we maintain a changes.sql file in our repo that developers put their changes in. When we deploy to production, we:
Run a test deployment to the QA server:
first reproduce the production environment (app & DB) on the QA server
run changes.sql against the qa db
deploy the app to qa
run integration tests.
When we are sure the app runs fine in QA with the scripted changes to the DB (i.e. nobody forgot to include their DB changes in changes.sql, references, etc.), we:
backup the production database
run the scripts in the changes.sql file against the production db
deploy the app
clear the changes.sql file
All the deployment is run through automated scripts, so we know we can reproduce it.
Hope this helps.
We have a migrations/ folder inside almost every project, and there are so-called "up" and "down" scripts (SQL). Every developer is obliged to write their own up/down scripts and to verify them against the testing environment.
There are other tools and frameworks for migrations, but we haven't had the time to test them...
Some are: Doctrine, Rails migrations, Propel (I think...); Capistrano can do it also.
