Sitefinity DB Migration without Sitefinity's Sync Module - database

Scenario:
Development Environment: Trial License for Sitefinity
Staging Environment: Purchased License from Sitefinity (but without Sync Module).
I am working on a project and have developed some code and other content locally (templates, pages, and images created through Sitefinity's admin panel).
Now I want to migrate/sync these new changes to the staging environment.
But the staging environment's license doesn't include the sync facility.
I tried the Visual Studio 2013 data comparison tool, but ran into a PageId conflict. Debugging the issue, I found that some IDs differ between the two databases, especially the Owner field.
I want to know whether changing this field's value and copying the new records into the staging DB will solve my problem.
Or is there any other way to integrate my changes into the staging database?
Thanks

If you haven't made changes to the staging environment, you could take a backup of your local database and reinstate it over the staging database. It sounds like all your changes (templates, pages and images) are going to be stored in the database by default. Any configuration changes you might have made locally are going to be stored in the App_Data/Sitefinity/Configuration folder, so you might want to publish your local project and use a tool like Beyond Compare to view those differences and migrate them over to staging as well.
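For reference, a backup/restore over the staging database might look roughly like the following T-SQL. The database names, logical file names, and paths here are placeholders; adjust them to your environment and copy the .bak file to the staging server before restoring.

-- Run against the local SQL Server instance
BACKUP DATABASE SitefinityLocal
    TO DISK = 'C:\Backups\SitefinityLocal.bak'
    WITH INIT;

-- Run against the staging SQL Server instance after copying the .bak file over.
-- WITH REPLACE overwrites the existing staging database; MOVE maps the logical
-- file names from the local database onto the staging server's file paths.
USE master;
ALTER DATABASE SitefinityStaging SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
RESTORE DATABASE SitefinityStaging
    FROM DISK = 'D:\Backups\SitefinityLocal.bak'
    WITH REPLACE,
         MOVE 'SitefinityLocal'     TO 'D:\Data\SitefinityStaging.mdf',
         MOVE 'SitefinityLocal_log' TO 'D:\Data\SitefinityStaging_log.ldf';
ALTER DATABASE SitefinityStaging SET MULTI_USER;

Remember that this only covers the database; the configuration files under App_Data/Sitefinity/Configuration still need to be compared and copied separately as described above.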
I agree, without the purchase of the Site Sync license keeping different Sitefinity environments in sync is a major headache. If I'm doing development work on a Sitefinity site that is in production, I end up having to recreate the page and db changes on that server once the changes are ready for the live environment.

Related

Azure continuous deployment from GitHub and database upgrades

I have a web application that I usually deploy using Web Deploy directly from Visual Studio (from whatever branch I am currently using in VS - normally master). But now I'm introducing a second web app on Azure that will be built from the same repo but a different branch. To make things simpler, I will configure both web apps on Azure to integrate directly with GitHub and associate each with a specific branch.
I also added two additional web.config files: Web.Primary.config and Web.Secondary.config and configured app settings on Azure portal of each web app by adding additional value SCM_BUILD_ARGS and set them to
SCM_BUILD_ARGS=-p:PublishProfile=Primary // in primary web app
SCM_BUILD_ARGS=-p:PublishProfile=Secondary // in secondary web app
which I understand will transform the correct config file with the specific external services' configuration (DB connection, mail server, etc.).
Now, the additional step that I would like to include in continuous deployment is running a set of SQL scripts from my repo that I used to run manually to upgrade the database during Web Deploy in VS. The individual scripts perform specific database upgrade steps:
backup current tables - backup creates a set of Backup_OriginalTableName tables that are copied from existing ones and populated with existing data
drop whole DB model - all non-backup objects are being dropped from procedures, functions, types, views, tables...
create model - creates all tables, views and indices
create user types
create user functions
create stored procedures
restore data to the new tables from the backup tables - this step may occasionally break if the new model introduces non-nullable columns that don't have defaults defined on them; I will have to mitigate that by adding a script that adds the missing columns to the backup tables with some defaults, but that's a separate issue (see the sketch just after this list).
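To make the backup and restore steps concrete, here is a minimal T-SQL sketch for a single table (the table and column names are made up). Listing the columns explicitly in the restore, and supplying a value for any new non-nullable column, is what keeps that last step from breaking:

-- Before the model is dropped: copy the table's data aside
SELECT * INTO Backup_Customer FROM dbo.Customer;

-- ... drop-model and create-model scripts run here ...

-- After the model is recreated: restore the data with an explicit column list.
-- IsActive is a hypothetical new NOT NULL column that gets a default of 1 here.
-- If CustomerId is an IDENTITY column, wrap this in SET IDENTITY_INSERT ON/OFF.
INSERT INTO dbo.Customer (CustomerId, Name, Email, IsActive)
SELECT CustomerId, Name, Email, 1
FROM Backup_Customer;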
I also used to have a set of batch files (BAT) in my VS solution that simply executed sqlcmd against a specific database instance and ran these scripts in the predefined order above. So I had these batches:
Recreate Local.bat - this one used additional SQL scripts to recreate an empty DB rather than restore from backup, with only the lookup tables populated plus some default data for development purposes (like predefined test users)
Restore Local.bat - I used this script to simply restore database from backup tables discarding any invalid data I may have created while debugging/testing since last DB recreate/upgrade/restore
Upgrade Local.bat - upgrade local development DB executing scripts mentioned above
Upgrade Production.bat - upgrade production DB on Azure executing scripts mentioned above
So, to support the whole deployment process that I have been doing manually in VS, I would now like to also execute these scripts against the specific Azure SQL DB during continuous deployment. I suppose I should run them right after code deployment, because if that fails, the DB shouldn't be upgraded either.
I'm a bit confused about where and how to do this. Can I configure it somewhere in the Azure portal? I was looking for resources on the web but can't seem to find any relevant information on how to add deployment steps that execute these scripts. I would think this is an everyday scenario, as it's hard to think of web apps that don't require databases these days.
Maybe it's just my process for DB upgrade/deployment that is wrong, so let me also know if there is any other normal way to do DB upgrades/migrations with continuous deployment on Azure... I may change my process to accommodate this.
Note 1: I'm not using Entity Framework or any other full-blown ORM. I'm using NPoco, and all my DB logic is built in stored procedures that the DAL uses.
Note 2: I'm aware of the recently introduced staging capabilities of Azure, but my apps are on a cheaper plan that doesn't support staging and I want to keep it that way, as I may introduce additional web apps along the way that will use additional code branches and resources (DB, mail, etc.).
It sounds to me like your db project is a good candidate for SSDT and inclusion in source control. You can create a MyDB.sqlproj that builds your db as a dacpac, and then you can use SqlPackage.exe Publish to accomplish your deployment to Azure.
We recently brought our databases under source control and follow a similar process to build and automatically deploy them (but not to a SQL Azure DB). We've found the source control, SSDT tooling support, and automated deployment options to be worth the effort of setting up and maintaining our project this way.
This SO question has some good notes for Azure deployment of a dacpac using SSDT:
How to publish DACPAC file to a SQL Server database project via SQLPackage.exe of SSDT?

Syncing Magento database from development to production

I use git for version control. I have a development, staging and production environment. When I finish in development I push to staging for review by the client. When approved, I push changes from staging to production. That works fine as long as there are no database changes. What happens if I install modules via Magento Connect in local development and they make database modifications?
How would I push those changes up to the production server, given that the production server is always changing?
Edit:
I wrote two shell scripts. One pulls the production database down to my development server, replaces the base URL with the development URL, and updates my development DB accordingly. It also leaves the production SQL dump behind to be added to my git repo. I'm not really sure whether it's beneficial to keep the raw dumps in source control, but I'm going to try it out. The second script moves the development database up to staging and essentially performs the same operations as the first.
Now when it comes time to move to production, I pull the updated production repo onto the production server and allow Magento to do its thing. I also started using SQLyog recently; it has a database comparison wizard which gives me the differences between my development and production databases and lets me merge the changes in selectively. It also creates a migration script, which I add to source control as well. If anything goes wrong I can run the comparison to see if anything was missed.
Does this sound like a decent workflow to you guys?
This is a common situation for developers. It's much easier to modify code and schema and be assured that all is well when there is a small codebase which is thoroughly understood and doesn't have too much flexibility for UI. Of course, this is not the case with Magento, which can be quite difficult to work into automated testing and continuous integration schemes. That said, there are some knowable, testable behaviors on which you can rely.
An Overview
When dealing with local development which is merged to production, one must be assured that the schema and data changes relevant to new or changed functionality are also applied when the filesystem is updated. This is actually how Magento itself works. Module configuration files can supply a version number and can configure setup resources. This information is used to enter a schema & data modification workflow which results in version information being added to the database. It is from the consistency between the file-designated version number and the database-registered version number that you (or the system) can infer that the database is in the appropriate state for the files present.
This means that when the new/updated module files are merged to production and the necessary conditions are met (e.g. the config cache is invalid, etc.), the database upgrade should take place. Your (proper) concern is that this process might break based on remote server-level differences, remote data differences, etc. Without a tightly-regulated integration testing process, there is some overhead.
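To see that mechanism concretely: in Magento 1 the database-registered version numbers live in the core_resource table, so you can compare what the module's config.xml declares against what the database has recorded with a query like the one below (the module code 'mymodule_setup' is just an example).

-- Version Magento has recorded for a module's setup resource
SELECT code, version, data_version
FROM core_resource
WHERE code = 'mymodule_setup';

If the file-declared version is higher than the recorded one, the install/upgrade scripts should run on the next request once the configuration cache is refreshed.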
Plan of Attack: Pick the Right Strategy
The essential activity in this area is assessing a module's impact on the database. This should be straightforward with any module which is worthy of being installed; check for any of the following:
A system.xml file
Existence of install/upgrade scripts in sql or data folders
Existence of custom setup resource class (configured under global/resources xpath)
Appropriate configuration XML (version number in module config node & a setup resource under global/resources xpath)
For 1, simply review the structure and know that its effects on the database will be limited to the core_config_data table, and generally only once an admin has saved values via the GUI (noting that 1. below applies as well).
For 2 & 3, review the scripts which are set to be run. These can be divided into four general areas:
1. Configuration settings - look for setConfigData() and deleteConfigData() calls
2. Table additions and edits (new tables, adding columns, etc.)
3. EAV-related changes and additions; look for EAV setup resources
4. Non-EAV data changes: installation of new data or modification of existing data
It's a matter of feel & intuition, but gauging the level of impact on the db will allow you to determine if you should clone production data down to local dev and test the setup workflow locally, verifying it works okay, then pushing to production and re-checking (backing up always!). If the changes are wide-ranging, it may be best to take the site offline so that you can ensure that you won't lose order or customer data if you need to revert after a botched upgrade.
You generally don't ever want to push data contained in a db from dev > prod. Your schema defs should be contained in Magento sql install scripts. If you do have actual new data you want to push up to prod, you'll have to do so on a case-by-case basis. You will most likely pull down from prod > dev to test out data and configuration before running the actual case on prod.
Case - 1:
If your production server has the same data (DB) which you have locally, then just copy the database and files to the production server and do the following:
1) Delete the content of the folder /var
2) Change the values of the file /app/etc/local.xml
There you can find your connection string data (database user, host and name).
3) Once you've got your database uploaded, you need to make some changes.
Run this query:
SELECT * FROM core_config_data WHERE path = 'web/unsecure/base_url' OR path = 'web/secure/base_url';
You will get 2 rows. Update these rows by running this query:
UPDATE core_config_data SET value = 'YOUR_NEW_LIVE_URL' WHERE path LIKE 'web/%/base_url';
That’s all.
Case - 2:
If you don't want to change the DB data in production, then you need to install the modules via Magento Connect directly on the production server, and you can update the files which you have changed locally.

Proper structure of ASP.NET website and database in Visual Studio

My main problem is: where does the database go?
The project will be on SVN and is developed using ASP.NET MVC with the repository pattern. Where do I put the SQL Server database (MDF file)? If I put it in App_Data, then my other teammates can check out the source and database and run it, with the database being deployed in the VS instance.
The problems with this method are:
I cannot use SQL Management Studio with this database.
Most web hosts require me to deploy the database using their UI or SQL Management Studio. Putting it in App_Data will make no sense there.
Connection String has to be edited each time I'm moving from testing locally to testing on the web host.
If I create the database using SQL Management studio, my problems are:
How do I keep this consistent with source control (teammates have to re-script the DB if the schema changes)?
Connection string again. (I'd like to automatically use the string when on production server).
Is there a solution to all my problems above? Maybe some form of patterns of tools that I am missing?
Basically your two points are correct - unless you're working off a central database, everyone will have to update their database when changes are made by someone else. If you're working off a central database you can also get into issues where a database change is made (e.g. a column dropped) and the corresponding source code isn't checked in. Then you're all dead in the water until the source code is checked in, or the database is rolled back. Using a central database also means developers have no control over when database schema changes are pushed to them.
We have the database installed on each developer's machine (especially good since we target different DBs, each developer has one of the supported databases giving us really good cross platform testing as we go).
Then there is the central 'development' database which the 'development' environment points to. It is built by continuous integration on each check-in, and upon a successful build/test it is published to development.
Changes that developers make to the database schema on their local machine need to be checked into source control. They are database upgrade scripts that make the required changes to the database from version X to version Y. The database is versioned. When a customer upgrades, these database scripts are run on their database to bring it up from their current version to the required version they're installing.
These dbpatch files are stored in the following structure:
./dbpatches
    ./23
        ./common
            ./CONV-2345.dbpatch
        ./pgsql
            ./CONV-2323.dbpatch
        ./oracle
            ./CONV-2323.dbpatch
        ./mssql
            ./CONV-2323.dbpatch
In the above tree, version 23 has one common dbpatch that is run on any database (it is ANSI SQL), and a specific dbpatch for the three databases that require vendor-specific SQL.
We have a database update script that developers can run, which runs any dbpatch that hasn't yet been run on their development machine (irrespective of version, since multiple dbpatches may be committed to source control during a single version's development).
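A minimal sketch of how such an update script can track what has already been applied, assuming a patch log table keyed by the dbpatch file name (the table and column names here are invented; SQL Server flavour shown):

-- One row per dbpatch that has been applied to this database
CREATE TABLE dbpatch_log (
    patch_name  varchar(255) NOT NULL PRIMARY KEY,
    applied_at  datetime     NOT NULL DEFAULT CURRENT_TIMESTAMP
);

-- Before running CONV-2323.dbpatch the update script checks:
--   SELECT 1 FROM dbpatch_log WHERE patch_name = 'CONV-2323.dbpatch';
-- and after a successful run it records the patch:
INSERT INTO dbpatch_log (patch_name) VALUES ('CONV-2323.dbpatch');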
Connection strings are maintained in NHibernate.config; if present, NHibernate.User.config is used instead, and NHibernate.User.config is excluded from source control. Each developer has their own NHibernate.User.config, which points to their local database and sets the appropriate dialect, etc.
When being pushed to development we have a NAnt script which does variable substitution in the config templates for us. This same script is used when going to staging as well as when doing packages for release. The NAnt script populates a templates config file with variable values from the environment's settings file.
Use Management Studio or Visual Studio's Server Explorer. App_Data isn't used much "in the real world".
This is always a problem. Use a tool like SQL Compare from Redgate or the built-in database compare tools of Visual Studio 2010.
Use Web.Config transformations to automatically update the connection string.
I'm not an expert by any means but here's what my partner and I did for our most recent ASP.NET MVC project:
Connection strings were always the same since we were both running SQL Server Express on our development machines, as were our staging and production servers. You can just use a dot instead of the computer name (eg. ".\SQLEXPRESS" or ".\SQL_Named_Instance").
Alternatively you could also use web.config transformations for deploying to different machines.
As far as the database itself, we just created a "Database Updates" folder in the SVN repository and added new SQL scripts when updates needed to be made. I always thought it was a good idea to have an organized collection of database change scripts anyway.
A common solution to this type of problem is to have the database versioning handled in code rather than storing the database itself in version control. The code is typically executed on app_start but could be triggered in other ways (build/deploy process). Then developers can run their own local databases or use a shared development database. The common term for this is called database migrations (migrating from one version to the next). Here is a stackoverflow question for .net tools/libraries to make this easier: https://stackoverflow.com/questions/8033/database-migration-library-for-net
This is the only way I would handle this on projects with multiple developers. I've used this successfully with teams of over 50 developers and it's worked great.
The Red Gate solution would be to use SQL Source Control, which integrates into SSMS. It maintains a SQL scripts folder structure in source control, which you can keep in the same folder/repository that you keep your app code in.
http://www.red-gate.com/products/SQL_Source_Control/

Managing SQL Server database version control in large teams

For the last few years I was the only developer that handled the databases we created for our web projects. That meant that I got full control of version management. I can't keep up with doing all the database work anymore and I want to bring some other developers into the cycle.
We use Tortoise SVN and store all repositories on a dedicated server in-house. Some clients require us not to have their real data on our office servers so we only keep scripts that can generate the structure of their database along with scripts to create useful fake data. Other times our clients want us to have their most up to date information on our development machines.
So what workflow do larger development teams use to handle version management and sharing of databases? Most developers prefer to deploy the database to an instance of SQL Server on their development machine. Should we:
Keep the scripts for each database in SVN and make developers export new scripts if they make even minor changes
Detach databases after changes have been made and commit MDF file to SVN
Put all development copies on a server on the in-house network and force developers to connect via remote desktop to make modifications
Some other option I haven't thought of
Never have an MDF file in the development source tree. MDFs are a result of deploying an application, not part of the application sources. Thinking of the MDF as development source is a shortcut to hell.
All the development deliverables should be scripts that deploy or upgrade the database. Any change, no matter how small, takes the form of a script. Some recommend using diff tools, but I think they are a rat hole. I champion versioning the database metadata and having scripts that upgrade from version N to version N+1. At deployment, the application checks the currently deployed version and then runs all the upgrade scripts that bring it up to current. There is no script that deploys the current version directly: a new deployment first deploys v0 of the database and then goes through all the version upgrades, including dropping objects that are no longer used. While this may sound a bit extreme, it is exactly how SQL Server itself keeps track of the various changes occurring in the database between releases.
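As a sketch of that pattern (T-SQL; the single-row version table and the version numbers are only examples), each upgrade script applies itself only if the database is at the version it expects, and bumps the version when it finishes:

-- Upgrade script: version 4 -> version 5
IF (SELECT version FROM dbo.DatabaseVersion) = 4
BEGIN
    -- ... schema changes for version 5 go here (new tables/columns, dropped objects) ...

    UPDATE dbo.DatabaseVersion SET version = 5;
END;

A fresh deployment then runs the v0 script followed by every upgrade script in order; scripts whose guard doesn't match the current version simply do nothing.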
As simple text scripts, all the database upgrade scripts are stored in version control just like any other sources, with tracking of changes, diff-ing and check-in reviews.
For a more detailed discussion and some examples, see Version Control and your Database.
Option (1). Each developer can have their own up-to-date local copy of the DB (up to date meaning recreated from the latest version-controlled scripts: base + incremental changes + base data + run data). To make this work you should have the ability to 'one-click' deploy any database locally.
You really cannot go wrong with a tool like Visual Studio Database Edition. This is a version of VS that manages database schemas and much more, including deployments (updates) to target server(s).
VSDE integrates with TFS so all your database schema is under TFS version control. This becomes the "source of truth" for your schema management.
Typically developers will work against a local development database, and keep its schema up to date by synchronizing it with the schema in the VSDE project. Then, when the developer is satisfied with his/her changes, they are checked into TFS, and a build and then deployment can be done.
VSDE also supports refactoring, schema compares, data compares, test data generation and more. It's a great tool, and we use it to manage our schemas.
In a previous company (which used Agile in monthly iterations), .sql files were checked into version control, and (an optional) part of the full build process was to rebuild the database from production then apply each .sql file in order.
At the end of the iteration, the .sql instructions were merged into the script that creates the production build of the database, and the script files were moved out. So you're only applying updates from the current iteration, not going back to the beginning of the project.
Have you looked at a product called DB Ghost? I have not personally used it, but it looks comprehensive and may offer an alternative for point 4 in your question.

How do I manage version control when developing with SQL Server Express?

I am developing a website using SQL Server Express on my development machine. My web hosting company is providing me with SQL Server 2005.
At the moment all I have is a database that I develop with and a database that is on the live server. I do not have the original scripts to generate the schema but I can auto generate the create scripts individually or for the entire database.
I am now putting my code into source control and I would like to know how I manage my database schema. What do I put into it? Create commands? Alter scripts?
The database is very small at the moment and it is not hard to maintain the two databases, but I am concerned that going forward it will get out of hand. Do you have any tips for keeping the live database in sync when deploying new code?
EDIT Any ideas as to what should go into source control? Should the DDL scripts go here?
Deploy schema changes as DDL upgrade scripts and, if you haven't already, add a table to contain the schema version number which you update at the end of each upgrade script.
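For example (the table and column names are only illustrative), the version table can be a single-row table created with the initial deployment, and each DDL upgrade script ends by bumping it:

-- Created once, when the database is first deployed
CREATE TABLE dbo.SchemaVersion (version int NOT NULL);
INSERT INTO dbo.SchemaVersion (version) VALUES (1);

-- Last statement of each DDL upgrade script
UPDATE dbo.SchemaVersion SET version = 2;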
EDIT: Yes, all your scripts should go into source control, including DDL scripts.
I typically keep a testing copy of the live database on my local or virtual development box, which I routinely refresh from the prod database. The testing copy is meant for my unrestricted experimentation. When I have something that I believe is ready for deployment, I move it to my development dataset, which mirrors the prod DB and is not used for playing around. If the development DB passes all my tests, I deploy the script to the production DB.
