We are very new to Azure. We have a large existing website (multiple instances for different customers) with an associated Windows service and a SQL Server 2008 database.
We are in the process of migrating the website to Azure in development. We are creating an Azure worker role to wrap the code that the Windows service currently runs every hour, but my main concern is deployment. In an ideal world we would automate deployment, but for now we want to get to the point where we can build a single deployment package for a customer that runs the database migration scripts against the Azure SQL database and deploys the web role and site, while a holding page is displayed to let users know the site is currently being updated. It would also be great if there were a way to roll back if something goes wrong.
Despite my research I cannot find an answer for the process above. Everything seems to involve deploying the database changes (which for us will include custom scripts for data migration/changes) and then publishing the site to Azure from Visual Studio, but as we want to deploy this multiple times, it would be ideal to build a package that we run against each customer's site/database when we/they are ready to migrate to the next version, as this may not always happen at the same time.
Our current deployment strategy is a mess: we stop the application in IIS and start another one that shows a holding page saying the site is being updated. We then stop the Windows service and manually copy the latest version over the existing website and Windows service. We then run a SQL script containing any database changes, and finally restart the Windows service and the IIS application.
This is not sustainable, but in my research I have not come across the best option for moving forward. I have looked at MSBuild, but it is completely new to me and finding somewhere to start with such unfamiliar technology is proving hard. I also looked at FluentMigrator and considered running our database changes from Application_Start, but I am not sure I am comfortable with that.
As we are looking to move to Azure, where we will have the site running with five different configurations, we are currently looking at publishing profiles and the hosted build controller, but I cannot figure out how to deploy the database changes alongside these.
I would appreciate some insight into how others handle deployments like this, taking into account that we are using all the latest tech (VS 2013, VSO, Azure, etc.) and that we need to be able to change the configuration for different customers. I am assuming publishing profiles are the best way to do that, but I may be wrong.
Hi,
We are currently working on a .NET Core project that will use multiple databases with the same structure.
In short, this is a multi-tenant project: each tenant will use the same web application (multiple instances behind a load balancer), BUT each tenant will have its own database.
We are searching for the best solution to ease our deployment process.
When the application (and DB) is updated, we need to update the database structure on all SQL Servers and all databases (one SQL Server can contain x databases).
FYI, the application and SQL Server are hosted on AWS, and our CI/CD is Azure DevOps.
And one last (but not least) limitation: we work in VS Code only (Mac & Linux laptops).
So we have looked at some solutions:
Using Database projects (.sqlproj) + DACPAC generation deployed using DevOps, but these aren't available in VS Code
Using Migrations: doesn't work with multiple databases and dynamic connection strings
Using SQL scripts: too complicated to maintain by hand a SQL script that handles every possible case
So could someone give us some advice to solve this problem?
The general solution here is to generate SQL scripts for each deployment and integrate those into your CI/CD process.
You could use EF Migrations to generate a SQL script that is then tested, committed to your repo as a first-class asset, and deployed by your CI/CD pipeline. Or you could use SSDT to manage the schema and generate change scripts. But those aren't the only reasonable ways.
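As a concrete illustration of the first option: with EF Core you can generate an idempotent script once and run the same file against every tenant database. The sketch below shows roughly the shape of that output; the migration id, table, and column names are made up.

    -- Generated with something like: dotnet ef migrations script --idempotent -o migrate.sql
    -- (migration id and object names below are hypothetical)
    IF OBJECT_ID(N'[__EFMigrationsHistory]') IS NULL
    BEGIN
        CREATE TABLE [__EFMigrationsHistory] (
            [MigrationId]    nvarchar(150) NOT NULL,
            [ProductVersion] nvarchar(32)  NOT NULL,
            CONSTRAINT [PK___EFMigrationsHistory] PRIMARY KEY ([MigrationId])
        );
    END;
    GO

    -- Each migration is guarded, so the script is safe to run against a database
    -- that has already received some of the changes.
    IF NOT EXISTS (SELECT 1 FROM [__EFMigrationsHistory] WHERE [MigrationId] = N'20240101000000_AddTenantLabel')
    BEGIN
        ALTER TABLE [Customers] ADD [TenantLabel] nvarchar(100) NULL;

        INSERT INTO [__EFMigrationsHistory] ([MigrationId], [ProductVersion])
        VALUES (N'20240101000000_AddTenantLabel', N'8.0.0');
    END;
    GO

Because every tenant database shares the same structure, your pipeline can simply loop over the list of connection strings and apply the same script to each one.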
If you are modifying the schema by hand without using SSDT, you would normally just use a tool to generate the change script and go from there.
There are many tools (including SSDT) that help you diff a development environment against a target production schema and generate the change scripts, e.g. Redgate ReadyRoll.
Note that if you intend to perform online schema updates you need to review the change scripts manually for offline DDL operations, and to ensure that your code/database changes have the correct forward and backward compatibility to support a rollout while the application is online.
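To make the compatibility point concrete, here is a minimal sketch of the usual expand/contract approach, with invented table and column names; the idea is that the change is split so that both the old and the new application code can run against the schema during the rollout.

    -- Release N: add the new column as NULL so application code already deployed
    -- (which never writes it) keeps working; then backfill existing rows.
    -- On large tables you would batch the backfill to keep the table available.
    ALTER TABLE dbo.Orders ADD IsArchived bit NULL;
    GO
    UPDATE dbo.Orders SET IsArchived = 0 WHERE IsArchived IS NULL;
    GO

    -- Release N+1, once every application instance writes the column on insert:
    -- tighten the constraint (review this step explicitly for online impact).
    ALTER TABLE dbo.Orders ALTER COLUMN IsArchived bit NOT NULL;
    GO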
And preparing, reviewing, testing, and editing the database change scripts is not something that everyone on the dev team needs to do, so you could always consider jumping onto a Windows VM for that task.
It is not clear to me how I should use the new features of SSIS in SQL Server 2012/2014 in an enterprise environment. Specifically, I am referring to the project deployment model, project parameters, environments, etc. We use a three-tier environment workflow: developing in development, testing and staging in QA, and production in production. The developers only have access to the development environment. The DBAs migrate code to the other environments. All source is kept in TFS.
What is the intended workflow using these new features? If a developer develops the project/package, does the developer deploy the project to the SSISDB or does the developer stop after checking in the source? Where does the DBA come into the picture? Which environment contains SSISDB? How does the project/package get deployed to the other environments?
There seem to be many “how-to’s” published on the Internet, but I am struggling to find one that deals with business workflow best practices. Can anyone suggest a link to an article on this subject?
Thanks.
What is the intended workflow using these new features?
It is up to the enterprise to determine how they will use them.
If a developer develops the project/package, does the developer deploy the project to the SSISDB or does the developer stop after checking in the source?
Where does the DBA come into the picture? Which environment contains SSISDB? How does the project/package get deployed to the other environments?
It really does depend. I advocate that developers have sysadmin rights in the development tier of servers. If they break it, they fix it (or if they've really pooched it, we re-image the server). In that scenario, they develop the implementation process and use deployments to Development to simulate the actions the DBAs will take when deploying to all the other pre-production and production environments. This generally satisfies your favorite regulatory standard (SOX/SAS 70/HIPAA/PCI/etc.), as those creating the work are not the same ones who install it.
What is the deliverable unit of work for SSIS packages using the project deployment model? It is an .ispac file. That is a self-contained zip file with a manifest, project-level parameters, project-level connection managers and the SSIS packages.
How you generate that is up to you. Maybe you check the .ispac in and that is what is deployed to your environments. Maybe the DBAs open the solution from source control and build their own .ispac. Maybe you have Continuous Integration (CI) running, and you click a button and some automated process generates and deploys the .ispac.
That's 1/3 of the equation. From the SSISDB side, you likely want to create an Environment and populate it with variable values. Things like Connection Strings and file paths and user names & passwords. When you start creating those things, CLICK THE CREATE SCRIPT TO NEW WINDOW button! Otherwise, you're going to have to re-enter all that data when you lift to a new environment. I would expect your developers to check those scripts into source control. For passwords, blank out the value and make notes in your deployment checklist that they need to fix that before mashing F5.
You also need SQL Scripts to create the structure (folder) within the SSISDB for the project to be deployed into. Once deployed, you'll want to apply the Environment values, created in the preceding step, to the newly deployed project. Save those out as well.
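For illustration, the checked-in scripts usually end up as calls to the SSISDB catalog stored procedures. The folder, environment, project, and variable names below are placeholders, not your real ones.

    -- Create the folder and an environment with a value the packages need.
    DECLARE @folder_id bigint;
    EXEC SSISDB.catalog.create_folder
         @folder_name = N'LoadWarehouse', @folder_id = @folder_id OUTPUT;

    EXEC SSISDB.catalog.create_environment
         @folder_name = N'LoadWarehouse', @environment_name = N'QA';

    EXEC SSISDB.catalog.create_environment_variable
         @folder_name = N'LoadWarehouse', @environment_name = N'QA',
         @variable_name = N'SourceConnectionString', @data_type = N'String',
         @sensitive = 0,
         @value = N'Data Source=QASQL01;Initial Catalog=Source;Integrated Security=SSPI;',
         @description = N'Source system connection';

    -- After the .ispac has been deployed into the folder, reference the environment
    -- and point a project parameter at the environment variable.
    DECLARE @reference_id bigint;
    EXEC SSISDB.catalog.create_environment_reference
         @folder_name = N'LoadWarehouse', @project_name = N'LoadWarehouse',
         @environment_name = N'QA', @environment_folder_name = NULL,
         @reference_type = 'R', -- relative: look for the environment in the project folder
         @reference_id = @reference_id OUTPUT;

    EXEC SSISDB.catalog.set_object_parameter_value
         @object_type = 20, -- 20 = project-level parameter
         @folder_name = N'LoadWarehouse', @project_name = N'LoadWarehouse',
         @object_name = N'LoadWarehouse', -- for a project-level parameter, the object is the project itself
         @parameter_name = N'SourceConnectionString',
         @parameter_value = N'SourceConnectionString',
         @value_type = 'R'; -- 'R' = take the value from the environment variable

Keeping one such script per tier also makes it easy to diff what QA and production are supposed to look like.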
I would have each environment contain its own SSISDB. I don't want a missed configuration allowing a process in the production tier to reach across to the development tier and pull data. I've seen that; it's not pretty. When code is deployed to the QA/Stage tier, we find out quickly whether we missed a connection string somewhere, because the dev servers reject the connection from QA. This means our SQL instances don't all run under the same account. Each tier gets its own account: domain\SQLServer_DEV, domain\SQLServer_QA, domain\SQLServer_PROD. Do what you can to prevent yourself from having a bad day. If you go with a single/shared SSISDB across all your tiers, it can work, but you're going to have to invest a lot more energy ensuring that packages always run with the correct configuration environment applied, lest bad things happen.
I have a web application that I usually deploy using Web Deploy directly from Visual Studio (whatever branch I am currently on in VS, normally master). But now I'm introducing a second web app on Azure that will be built from the same repo but a different branch. To make things simpler I will be configuring both web apps on Azure to integrate directly with GitHub and associating each with its specific branch.
I have also added two additional config files, Web.Primary.config and Web.Secondary.config, and configured the app settings in the Azure portal of each web app by adding an additional value, SCM_BUILD_ARGS, set to
SCM_BUILD_ARGS=-p:PublishProfile=Primary // in primary web app
SCM_BUILD_ARGS=-p:PublishProfile=Secondary // in secondary web app
which I understand will transform the correct config file with the specific external services' configuration (DB connection, mail server, etc.).
Now, the additional step that I would like to include in continuous deployment is to run a set of SQL scripts that I have in my repo and that I used to upgrade the database manually during Web Deploy from VS. The individual scripts each perform a specific database upgrade step:
backup current tables - the backup creates a set of Backup_OriginalTableName tables that are copied from the existing ones and populated with the existing data
drop the whole DB model - all non-backup objects are dropped: procedures, functions, types, views, tables...
create model - creates all tables, views and indices
create user types
create user functions
create stored procedures
restore data to the new tables from the backup tables - this step may occasionally break if we introduce new non-nullable columns to tables in the new model that don't have defaults defined on them; I will somehow have to mitigate this by adding an additional script that adds the missing columns to the backup tables and gives them some defaults (a rough sketch of such a script follows this list), but that's a completely different issue.
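For what it's worth, a sketch of that mitigation script could look like the following; the table, column, and constraint names here are invented for illustration.

    -- Give the backup table the new column with a default before the restore step
    -- copies rows back into the rebuilt model. The guard makes it safe to re-run.
    IF COL_LENGTH(N'dbo.Backup_Customers', N'IsArchived') IS NULL
    BEGIN
        ALTER TABLE dbo.Backup_Customers
            ADD IsArchived bit NOT NULL
                CONSTRAINT DF_Backup_Customers_IsArchived DEFAULT (0);
    END;
    GO

After that, the existing restore script can copy the column across along with everything else.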
I also used to have a set of batch (BAT) files in my VS solution that simply executed sqlcmd against a specific database instance and ran these scripts in the predefined order (as above); a sketch of such an entry point follows the list below. Hence I had these batches:
Recreate Local.bat - this one used additional SQL scripts to recreate an empty DB (rather than restore from backup) with only the lookup tables populated and some default data for development purposes (like predefined test users)
Restore Local.bat - I used this script to simply restore the database from the backup tables, discarding any invalid data I may have created while debugging/testing since the last DB recreate/upgrade/restore
Upgrade Local.bat - upgrades the local development DB by executing the scripts mentioned above
Upgrade Production.bat - upgrades the production DB on Azure by executing the scripts mentioned above
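As a hedged sketch (the file names are placeholders for the scripts listed earlier), the upgrade batches boil down to running a single SQLCMD-mode entry point against whichever database the batch targets:

    -- Run in SQLCMD mode, e.g. sqlcmd -S <server> -d <database> -i Upgrade.sql
    -- The same file works for Upgrade Local.bat and Upgrade Production.bat;
    -- only the -S/-d arguments differ.
    :on error exit
    :r .\01_backup_current_tables.sql
    :r .\02_drop_model.sql
    :r .\03_create_model.sql
    :r .\04_create_user_types.sql
    :r .\05_create_user_functions.sql
    :r .\06_create_stored_procedures.sql
    :r .\07_restore_data_from_backup.sql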
So, to support the whole deployment process that I have been doing manually from VS, I would now also like to execute these scripts against the specific Azure SQL DB during continuous deployment. I suppose I should run them right after the code deployment, because if that fails the DB shouldn't be upgraded either.
I'm a bit confused about where and how to do this. Can I configure it somewhere in the Azure portal? I have been looking for resources on the web, but I can't seem to find any relevant information on how to add extra deployment steps that execute these scripts. I would think this is an everyday scenario, as it's hard to imagine web apps not requiring databases these days.
Maybe it's just my DB upgrade/deployment process that is wrong, so do also let me know if there is another, more usual way to do DB upgrades/migrations with continuous deployment on Azure... I may change my process to accommodate this.
Note 1: I'm not using Entity Framework or any other full-blown ORM. I'm using NPoco, and all my DB logic is built into stored procedures that the DAL uses.
Note 2: I'm aware of the recently introduced staging capabilities of Azure, but my apps are on a cheaper plan that doesn't support staging, and I want to keep it that way, as I may be introducing additional web apps along the way that will use additional code branches and resources (DB, mail, etc.).
It sounds to me like your db project is a good candidate for SSDT and inclusion in source control. You can create a MyDB.sqlproj that builds your db as a dacpac, and then you can use SqlPackage.exe Publish to accomplish your deployment to Azure.
We recently brought our databases under source control and follow a similar process to build and automatically deploy them (but not to a SQL Azure DB). We've found the source control, SSDT tooling support, and automated deployment options to be worth the effort of setting up and maintaining our project this way.
This SO question has some good notes for Azure deployment of a dacpac using SSDT:
How to publish DACPAC file to a SQL Server database project via SQLPackage.exe of SSDT?
I am setting up two dev environments (one will be on my local server, another on a cloud service provider where I will do work which may require more memory than on my local server).
What can I do to ensure that both environments are always fully in sync? I am looking at deploying apps centrally and using a tool to sync SQL Server databases, and another tool to keep SharePoint servers in sync, between two like-for-like VMs in the two environments. Is there anything else that would help achieve this?
Thanks
This is a very tricky problem for SharePoint development.
As far as SQL Server goes (for non-SharePoint stuff), you can just sync your databases using the tools provided with SQL Server, the Copy Database Wizard for example, or you could even write your own SSIS package if you need custom work done.
SharePoint is a different matter, though. You cannot just sync site collections / web applications from one server to another by copying the databases across; it won't work (for many reasons, but mostly because when you create a web application on a server, it creates the databases using a GUID as the application ID; that GUID is used everywhere in the database, and all the links between tables will break if you try to change it). The structure of the SharePoint database is not documented, and MS recommends against modifying it manually. And honestly, even if you did manage to sync your databases right from SQL Server, you would run into other problems, because not all the customizations you make are saved in the database (a lot of stuff goes into the 12 hive).
So it comes down to what you are trying to achieve.
If you are trying to sync customizations (i.e. your content types, list templates, web parts, etc.) that were coded, I would recommend that you just build WSP packages from your development environment and deploy them every time you need to sync.
If you are only trying to sync data (i.e. list items), you can use the backup/restore solution (you'll find it in Central Administration). Note that it isn't overly reliable if you have customizations, though. It works fine on out-of-the-box sites, but it can be tricky to restore once you use your own list templates, etc.
You can also write code to sync using the web services or Content Deployment API and see if it suits your needs.
You can also look into tools that will do all or a part of the work for you. Here is one
So basically, no matter how you decide to do it, it won't be as simple as you expected it to be. The DEV / TEST / PROD environment sync problem is classic for SharePoint development.
I work on a highly customized SharePoint web app, and the best solution we found was to:
Be very disciplined in our code: do all customizations through code and build WSP packages from that code. No SharePoint Designer; once you customize a page with SharePoint Designer you can't sync anything.
Sync the lists between any servers using the web services
We have a great process for upgrading our clients' websites as far as updating html/js code and assets is concerned (by using Subversion) that we are very happy with.
However, when it comes to upgrading databases, we are without any formal process.
If we add new tables/fields to our development database, when it comes to rolling it out to the production server we have to remember our changes and replicate them. We cannot simply copy the development database on top of the production database as client data would be lost (e.g. blog posts, account info etc).
We are also now in the process of building a web-app which is going to come across the same issues.
Does anyone have a solution that makes this process easier and less prone to error? How do big web-apps get round the problem?
Thanks.
I think that adding controls to the development process is paramount. At one of my past jobs, we had to script out all database changes. These scripts were then passed to the DBA with instructions on which environment to deploy them in. At the end of the day, you can implement technical solutions, but if the project is properly documented (IF!!!), then when it comes time for deployment the developers should remember to migrate the scripts along with the code files. My $.02
In my opinion your code should always be able to create your database from scratch, and therefore it should handle upgrades too. It should check a field in the database to see what version the schema is at and handle the upgrades up to the latest version.
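A minimal sketch of that version check, expressed in T-SQL (the table name, version numbers, and upgrade bodies are illustrative; the same logic can of course live in application code instead):

    -- Create the version marker on first run.
    IF OBJECT_ID(N'dbo.SchemaVersion') IS NULL
    BEGIN
        CREATE TABLE dbo.SchemaVersion (Version int NOT NULL);
        INSERT INTO dbo.SchemaVersion (Version) VALUES (0);
    END;
    GO

    DECLARE @version int = (SELECT TOP (1) Version FROM dbo.SchemaVersion);

    -- Each upgrade step runs only if the database is still below that version.
    IF @version < 1
    BEGIN
        -- example change: introduce a new table
        CREATE TABLE dbo.BlogTag (Id int NOT NULL PRIMARY KEY, Name nvarchar(100) NOT NULL);
        UPDATE dbo.SchemaVersion SET Version = 1;
    END;

    IF @version < 2
    BEGIN
        -- next schema change goes here
        UPDATE dbo.SchemaVersion SET Version = 2;
    END;
    GO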
I had some good luck with: http://anantgarg.com/2009/04/22/bulletproof-subversion-web-workflow/
The author has a database versioning workflow (with PHP script), which is decent.
Some frameworks have tools which deal with the database upgrade. For example rails migrations are pretty nice.
If no convenient tool is available for your platform you could try scripting modifications to your development database.
In my company we use this model for some of our largest projects:
Say X is the version of our application that has just been deployed, and it is the same as the latest development version.
We create a new directory for the scripts, naming it, for example, "version X+1", and add it to the Subversion repository.
When a developer wants to make a modification to the development database, he creates a .sql script with a name like "1 - does something.sql" that makes the modifications (they must be non-destructive), saves it, and then runs it on the development database. He commits the web app code and the SQL scripts together. Each developer does the same and maintains the order of execution of the scripts.
When we need to deploy version X+1, we copy the X+1 web app code and the scripts to the production server, back up the database, run the SQL scripts one by one against the production database, and deploy the new web application code.
After that we open a new (X+2) SQL script directory and repeat the process...
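For illustration, one of those numbered scripts might look like the sketch below (object names invented); the existence checks are what keep it non-destructive and safe to re-run against a database holding live client data.

    -- "1 - add post status lookup.sql" (hypothetical name and objects)
    IF OBJECT_ID(N'dbo.post_status') IS NULL
    BEGIN
        CREATE TABLE dbo.post_status (
            id   int          NOT NULL PRIMARY KEY,
            name nvarchar(50) NOT NULL
        );
    END;
    GO

    -- Seed lookup rows only if they are missing; never overwrite client data.
    IF NOT EXISTS (SELECT 1 FROM dbo.post_status WHERE id = 1)
        INSERT INTO dbo.post_status (id, name) VALUES (1, N'draft');
    IF NOT EXISTS (SELECT 1 FROM dbo.post_status WHERE id = 2)
        INSERT INTO dbo.post_status (id, name) VALUES (2, N'published');
    GO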
We basically have a similar approach to Senad's: we maintain a changes.sql file in our repo that developers put their changes in. When we deploy to production, we:
Run a test deployment to the QA server:
first reproduce the production environment (app & db) in the QA server
run changes.sql against the qa db
deploy the app to qa
run integration tests.
When we are sure the app runs fine in QA with the scripted changes to the db (i.e. nobody forgot to include their db changes in changes.sql, or references, etc.), we:
backup the production database
run the scripts in the changes.sql file against the production db
deploy the app
clear the changes.sql file
All of the deployment is run through automated scripts, so now we can reproduce it.
Hope this helps.
We have a migrations/ folder inside almost every project, and there are so-called "up" and "down" scripts (SQL). Every developer is obliged to write his own up/down scripts and to verify them against the testing environment.
There are other tools and frameworks for migrations, but we haven't had the time to test them...
Some are: DoctrineDB, Rails migrations, Propel (I think...); Capistrano can do it as well.