What's the best way to automatically deploy changes to a database-driven web application? Is there a single product out there that can modify the following...
Website (dlls, aspx, css files etc)
Database Schema (add tables, columns, etc)
Database data (modify table contents)
Reporting Services reports
I've seen various separate products, but not one that does everything.
Yes, there is a PowerShell method posted at http://www.codeproject.com/KB/install/DeploySite.aspx
Note: in addition, for the schema changes you might need to upload a schema.version file and then have a process on that server detect that a new schema file was uploaded and apply it. For new database rows you could do something similar. Another idea is to expose the SQL database as a web service and talk to it directly from your PowerShell script.
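As a rough sketch of that idea, a scheduled PowerShell script on the server could poll a drop folder for newly uploaded schema scripts and apply them with sqlcmd. The folder paths, file pattern, and connection details below are assumptions, not part of the linked article:

    # Poll a drop folder for newly uploaded schema scripts and apply them in name order.
    # Folder paths, the file pattern, and server/database names are placeholders.
    $dropFolder    = 'C:\Deploy\SchemaDrop'
    $appliedFolder = 'C:\Deploy\SchemaApplied'

    $pending = Get-ChildItem -Path $dropFolder -Filter 'schema.*.sql' | Sort-Object Name
    foreach ($file in $pending) {
        # -b makes sqlcmd return a non-zero exit code if the script fails
        & sqlcmd -S '.\SQLEXPRESS' -d 'MyAppDb' -E -b -i $file.FullName
        if ($LASTEXITCODE -eq 0) {
            Move-Item $file.FullName -Destination $appliedFolder   # mark as applied
        } else {
            Write-Error "Schema script $($file.Name) failed; leaving it in the drop folder."
            break
        }
    }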
Can you create a generalized deployment script from a Sql Server Db Project in VS 2015 that doesn't require a schema compare / publish against a specific target database?
Some background:
We are using Sql Server Database projects to manage our database schema. Primarily we are using the projects to generate dacpacs that get pushed out to our development environments. They also get used for brand new installations of our product. Recently we have developed an add-on to our product and have created a new db project for it, referencing our core project. For new installations of our product where clients want the add-on, our new project will be deployed.
The problem we are having is that we need to be able to generate a "generic" upgrade script. Most of our existing installations were not generated via these projects and all contain many "custom" stored procedures/etc specific to that client's installation. I am looking for a way to generate a script that would do an "If Not Exists/Create + Alter" without needing to specify the target database.
Our add-on project only contains stored procedures and a couple of tables, all of which will be new to any client opting for this add-on. I need to avoid dropping items that are not in the project while still deploying all of our new "stuff". I've found the Include Composite Objects option, which I can uncheck so that the deployment is specific to our add-on, but publishing still requires me to specify a target database so that a schema compare can be performed, and I end up with scripts that are specific to that particular database. I've played with pretty much every option and cannot find a solution.
Bottom Line: Is there a way for me to generate a generic script that I can give to my deployment team whenever the add-on is requested on an existing install without needing to do a schema compare or publish for each database directly from the project?
Right now I am maintaining a separate set of .sql files in our (non db) project following the if not exists/create+alter paradigm that match the items in the db project. These get concatenated during build of our add on so that we can give our deployment team a script to run. This is proving to be cumbersome and we'd like to be able to make use of the database projects for this, if at all possible.
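Roughly, the concatenation step looks like the sketch below (the folder layout, category order, and file names are simplified placeholders, not our exact build):

    # Concatenate per-object .sql files (each written in the
    # "IF NOT EXISTS ... CREATE / CREATE + ALTER" style) into one upgrade script.
    $scriptRoot = '.\AddOnScripts'
    $output     = '.\AddOnUpgrade.sql'

    # Tables first, then programmable objects.
    $ordered = @('Tables', 'Types', 'Functions', 'StoredProcedures') |
        ForEach-Object { Get-ChildItem -Path (Join-Path $scriptRoot $_) -Filter '*.sql' | Sort-Object Name }

    Set-Content -Path $output -Value '-- Generated add-on upgrade script'
    foreach ($file in $ordered) {
        Add-Content -Path $output -Value (Get-Content -Path $file.FullName -Raw)
        Add-Content -Path $output -Value 'GO'   # batch separator between objects
    }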
The best solution is to give the dacpacs to your installers. They run SqlPackage (maybe through a batch file or PowerShell) and point it at the server/DB to update. It can then generate the script or update the database directly. It sounds like they already have access to the servers, so they should be able to do this. SqlPackage should also be installed on the servers, or it can be run locally by the installer as long as they can see the target DB. This might help: schottsql.wordpress.com/2012/11/08/ssdt-publishing-your-project
There are a couple of examples of using PowerShell to do this, but it depends on how much you need to control DB names or server names. A simple batch file where you edit/replace the server/DB names might suffice. I definitely recommend a publish profile, and if this is hitting customer databases they could have modified, setting the "do not drop if not in project" options is almost essential. As long as your customers haven't made wholesale changes to core objects, you should be good to go.
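For example, a wrapper the installers run might look roughly like this, assuming sqlpackage.exe is on the PATH; the dacpac name, publish profile, server/DB names, and output path are all placeholders:

    # Generate a reviewable upgrade script from the add-on dacpac without
    # dropping anything that exists only in the client database.
    & sqlpackage.exe `
        "/Action:Script" `
        "/SourceFile:MyAddOn.dacpac" `
        "/Profile:MyAddOn.publish.xml" `
        "/TargetServerName:CLIENTSERVER" `
        "/TargetDatabaseName:ClientDb" `
        "/OutputPath:AddOnUpgrade_ClientDb.sql" `
        "/p:DropObjectsNotInSource=False" `
        "/p:BlockOnPossibleDataLoss=True"

    # Change /Action:Script to /Action:Publish (and drop /OutputPath) to apply
    # the changes directly instead of handing back a script.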
I have a web application that I usually deploy using Web Deploy directly from Visual Studio (from whatever branch I am currently using in VS - normally master). But now I'm introducing a second web app on Azure that will be built from the same repo but a different branch. To make things simpler, I will be configuring both web apps on Azure to integrate directly with GitHub and associating each with a specific branch.
I also added two additional web.config files, Web.Primary.config and Web.Secondary.config, and configured the app settings of each web app in the Azure portal by adding an additional SCM_BUILD_ARGS value, set to
SCM_BUILD_ARGS=-p:PublishProfile=Primary // in primary web app
SCM_BUILD_ARGS=-p:PublishProfile=Secondary // in secondary web app
which I understand will transform the correct config file with the specific external services' configuration (DB connection, mail server, etc.).
Now, the additional step that I would like to include in continuous deployment is running a set of SQL scripts that I have in my repo and have used to manually upgrade the database during Web Deploy from VS. The individual scripts each perform a specific database upgrade step:
backup current tables - backup creates a set of Backup_OriginalTableName tables that are copied from existing ones and populated with existing data
drop whole DB model - all non-backup objects are dropped: procedures, functions, types, views, tables...
create model - creates all tables, views and indices
create user types
create user functions
create stored procedures
restore data to new tables from backup tables - this step may occasionally break if the new model introduces non-nullable columns that don't have defaults defined on them; I will somehow have to mitigate this problem by adding an additional script that adds the missing columns to the backup tables and gives them some defaults, but that's a completely different issue.
I also used to have a set of batch (BAT) files in my VS solution that simply executed sqlcmd against a specific database instance and ran these scripts in the predefined order (as above; a rough PowerShell equivalent is sketched after this list). Hence I had these batch files:
Recreate Local.bat - this one used additional SQL scripts that, instead of restoring from backup, recreated an empty DB with only the lookup tables populated and some default data for development purposes (like predefined test users)
Restore Local.bat - I used this script to simply restore the database from the backup tables, discarding any invalid data I may have created while debugging/testing since the last DB recreate/upgrade/restore
Upgrade Local.bat - upgrade local development DB executing scripts mentioned above
Upgrade Production.bat - upgrade production DB on Azure executing scripts mentioned above
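The PowerShell equivalent I have in mind is roughly the sketch below; the script names, their order, and the connection details are placeholders standing in for what the real batch files pass to sqlcmd:

    # Run the upgrade scripts in their predefined order against one target DB.
    $server   = 'myserver.database.windows.net'
    $database = 'MyAppDb'
    $user     = 'deployUser'
    $password = $env:DB_DEPLOY_PASSWORD   # assumed to be supplied by the environment

    $scripts = @(
        '01_BackupTables.sql',
        '02_DropModel.sql',
        '03_CreateModel.sql',
        '04_CreateUserTypes.sql',
        '05_CreateUserFunctions.sql',
        '06_CreateStoredProcedures.sql',
        '07_RestoreDataFromBackup.sql'
    )

    foreach ($script in $scripts) {
        & sqlcmd -S $server -d $database -U $user -P $password -b -i $script
        if ($LASTEXITCODE -ne 0) { throw "Upgrade failed at $script" }   # stop on first failure
    }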
So, to support the whole deployment process I have been doing manually from VS, I would now like to also execute these scripts against the specific Azure SQL DB during continuous deployment. I suppose I should run them right after code deployment, because if that fails the DB shouldn't be upgraded either.
I'm a bit confused about where and how to do this. Can I configure it somewhere in the Azure portal? I've been looking for resources on the web but can't seem to find any relevant information on adding extra deployment steps to execute these scripts. I'd think this is an everyday scenario, as it's hard to imagine web apps that don't require databases these days.
Maybe it's just my DB upgrade/deployment process that is wrong, so let me also know if there is another, more standard way to do DB upgrades/migrations with continuous deployment on Azure... I may change my process to accommodate it.
Note 1: I'm not using Entity Framework or any other full-blown ORM. I'm using NPoco, and all my DB logic is built into stored procedures that the DAL uses.
Note 2: I'm aware of the recently introduced staging capabilities of Azure, but my apps are on a cheaper plan that doesn't support staging, and I want to keep it that way as I may be introducing additional web apps along the way that will use additional code branches and resources (DB, mail, etc.)
It sounds to me like your db project is a good candidate for SSDT and inclusion in source control. You can create a MyDB.sqlproj that builds your db as a dacpac, and then you can use SqlPackage.exe Publish to accomplish your deployment to Azure.
We recently brought our databases under source control and follow a similar process to build and automatically deploy them (but not to a SQL Azure DB). We've found the source control, SSDT tooling support, and automated deployment options to be worth the effort of setting up and maintaining our project this way.
This SO question has some good notes for Azure deployment of a dacpac using SSDT:
How to publish DACPAC file to a SQL Server database project via SQLPackage.exe of SSDT?
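A minimal sketch of that SqlPackage call against an Azure SQL database follows; the server, database, credentials, and dacpac path are placeholders:

    # Publish the dacpac built from MyDB.sqlproj to an Azure SQL database.
    & sqlpackage.exe `
        "/Action:Publish" `
        "/SourceFile:bin\Release\MyDB.dacpac" `
        "/TargetServerName:myserver.database.windows.net" `
        "/TargetDatabaseName:MyDB" `
        "/TargetUser:deployUser" `
        "/TargetPassword:$env:DB_DEPLOY_PASSWORD" `
        "/p:BlockOnPossibleDataLoss=True"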
I'm in the middle of doing a personal project and would like to create a system of three components.
A simple form application that would allow the user to input data into a database.
A database of multiple tables.
An excel spreadsheet that queries the database.
At this point in the project, the database needs to be stored on the PC of the person working on the project, and all three components need to be able to be zipped up in a folder and emailed around. I know how to code well enough to query databases from applications and Excel, but how can I go about creating a database that can be stored in a specific folder so it can be emailed around?
Thanks!
Look into SQL Server Compact Edition:
http://en.wikipedia.org/wiki/SQL_Server_Compact
http://xldennis.wordpress.com/2010/08/30/using-sql-server-compact-edition-database-with-excel/
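A minimal sketch of using a single-file SQL Server Compact database from PowerShell, just to show the shape of it; the assembly path (a typical SQL CE 4.0 install location), the .sdf location, and the table are assumptions:

    # Create (on first run) a single-file SQL CE database that can live in the
    # project folder and be zipped up and emailed with everything else.
    Add-Type -Path 'C:\Program Files\Microsoft SQL Server Compact Edition\v4.0\Desktop\System.Data.SqlServerCe.dll'

    $dbFile  = Join-Path $PSScriptRoot 'Products.sdf'
    $connStr = "Data Source=$dbFile"

    if (-not (Test-Path $dbFile)) {
        (New-Object System.Data.SqlServerCe.SqlCeEngine($connStr)).CreateDatabase()

        $conn = New-Object System.Data.SqlServerCe.SqlCeConnection($connStr)
        $conn.Open()
        $cmd = $conn.CreateCommand()
        $cmd.CommandText = 'CREATE TABLE Product (Id INT IDENTITY PRIMARY KEY, Name NVARCHAR(100))'
        $cmd.ExecuteNonQuery() | Out-Null
        $conn.Close()
    }

    # The form app and the Excel query can then point at the same .sdf file
    # using the same connection string.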
MS Access might also be an option here.
With an ASP.NET MVC3 application, how do I deploy the database into production, and how do I manage the schema changes?
When I'm developing my application, I see an aspnetmvc.mdf (and .ldf) file in App_Data. This has the aspnet_* tables, and also my tables (which I hand-created in SQL Server Express). This file is 10 MB, and it doesn't seem to me that I should simply upload it to my production machine.
Should I instead keep schema (and seed data) changes in a .SQL file and (somehow) run them on the server? Should I use NHibernate's methods for auto-generating tables? (If so, what about the ASP.NET standard tables?)
What's the best way to manage this? Ideally, I'd like something like Liquibase or Rails' DB migrations, where each change is isolated and can be run on its own. But I've never put a from-scratch ASP.NET MVC site into production, so I'm not sure what to do.
My thoughts on NHibernate's Schema Update are here.
There is no one right solution, but SchemaUpdate can get you about 90% of the way there. For the other 10%, I currently use hand-written SQL files (named by the date they were created), but there are other, more sophisticated options (such as Red Gate's SQL Compare or the data tools built into some versions of Visual Studio).
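One way to run those dated files in order is sketched below; this is just an illustration, and the folder, tracking file, and connection details are placeholders:

    # Apply hand-written migration scripts (named by date, e.g. 2011-03-15_AddOrderTable.sql)
    # in name order, recording each one so it only runs once.
    $migrationDir = '.\Migrations'
    $appliedLog   = Join-Path $migrationDir 'applied.txt'
    $applied      = if (Test-Path $appliedLog) { Get-Content $appliedLog } else { @() }

    Get-ChildItem -Path $migrationDir -Filter '*.sql' | Sort-Object Name | ForEach-Object {
        if ($applied -notcontains $_.Name) {
            & sqlcmd -S '.\SQLEXPRESS' -d 'MyMvcAppDb' -E -b -i $_.FullName
            if ($LASTEXITCODE -ne 0) { throw "Migration $($_.Name) failed" }
            Add-Content -Path $appliedLog -Value $_.Name   # remember it ran
        }
    }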
I have a client who owns a business with a handful of employees. He has a product website that has several hundred static product pages that are updated periodically via FTP.
We want to change this to a data-driven website, but the database (which will be hosted at an ISP) will have to be updated from data on my client's servers.
How best to do this on a shoestring? Can the database be hot-swapped via FTP, or do we need to build a web service we can push changes to?
Ask the ISP about the options. Some ISPs allow you to FTP-upload the .mdf (database file), some will allow you to connect with SQL Server Management Studio, and some will allow both. You've got to ask the ISP.
The last time I did this, we created XML documents that were FTP'd to the website. We had an admin page that would clear out the old data by running some stored procs to truncate the tables and then import the XML docs into the SQL tables.
Since we didn't have the whole server to ourselves, there was no access to SQL Server DTS to schedule this stuff.
There is a Database Publishing Wizard from MS which will take all your data and create a SQL file that can then be run on the ISP. It will also, though I've never tried it, go directly to an ISP database. There is an option button on one of the wizard screens that does it.
It does require the user to have a little training and it's still a manual process, so maybe it's not what you're after, but I think it will do the job.
Long-term, building a service to upload the data is probably the cleanest solution, as the app can then control its import procedures. You could go grossly simple with this and just have the local copy dump some sort of XML that the app could read, making it not much harder than uploading the file while still staying in the automatable category. Having this import procedure would also help with development, as you now have an automated and repeatable way to sync data.
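A minimal sketch of that grossly-simple version, assuming a local SQL Server as the source and plain FTP for the upload (the table, paths, URL, and credentials are placeholders):

    # Dump a table from the local database to XML, then FTP it to the website
    # for its import page to pick up.
    $connStr = 'Server=.\SQLEXPRESS;Database=ProductsLocal;Integrated Security=True'
    $conn    = New-Object System.Data.SqlClient.SqlConnection($connStr)
    $adapter = New-Object System.Data.SqlClient.SqlDataAdapter('SELECT * FROM Product', $conn)
    $data    = New-Object System.Data.DataSet('Catalog')
    $adapter.Fill($data, 'Product') | Out-Null
    $data.WriteXml('C:\Temp\products.xml')

    $client = New-Object System.Net.WebClient
    $client.Credentials = New-Object System.Net.NetworkCredential('ftpUser', $env:FTP_PASSWORD)
    $client.UploadFile('ftp://www.example.com/import/products.xml', 'C:\Temp\products.xml')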
This is what I usually do:
You could use a tool like Red Gate's SQL Data Compare to do this. The tool compares data between two catalogs (on the same or different servers) and generates a script for syncing them.