SVN server Auto Synchronization with Local Database

I am using ProjectLocker as my SVN server and TortoiseSVN as my client. I am stuck on synchronizing files at run time with my local DB files.

From your comments, it sounds like you may not be familiar with some version control concepts. For new Subversion users, I recommend Chapter 1 of the Version Control With Subversion book. This will explain what a working copy is in more detail, and how Subversion keeps your data. Chapter 2 has more information on a basic work cycle. ProjectLocker takes care of all the svnadmin steps for you, so you can ignore those and look at how to check out, update, and commit.
The first thing you should do is create a staging directory where you keep any files you're doing development on. You may need to copy your PHP, CSS, DB files, and so on to that location. You then run the TortoiseSVN equivalent of svn import to upload all the files to your server. Once you've imported them, back up the directory you just created and create an empty working directory. Run the TortoiseSVN equivalent of svn checkout and you will pull down all the files in your repository. From then on, as you make changes, run updates to pull changes from other users, and commit, Subversion will take care of identifying which changes can be merged automatically and which will need manual intervention.
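If it helps to see the underlying commands, here is a rough sketch of that cycle using the Subversion command-line client; the repository URL is only a placeholder for the one ProjectLocker gives you, and each step has a matching TortoiseSVN menu entry:

    # one-time import of your staging directory into the repository
    svn import ./staging https://projectlocker.example/svn/myproject/trunk -m "Initial import"

    # check out a fresh working copy into an empty directory
    svn checkout https://projectlocker.example/svn/myproject/trunk mywork
    cd mywork

    # daily cycle: pull other people's changes, review your own, commit
    svn update
    svn status
    svn commit -m "Describe your change"
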
If you wish to upload files to a remote location after commits and you have a paid account, you can use ProjectLocker's remote deployment solution to FTP a particular Subversion directory over to your actual server for deployment.
I apologize if this is a little vague, but the scope of your question is quite broad, and so I wanted to give you as concise an answer as possible while still addressing your needs.

Related

Managing different publish profiles for each developer in SSDT

In our current dev workflow there is a main database, DbMain. A process takes the latest version of the project, automatically deploys it there, and then triggers the unit tests. As we would like to always have a working version of the project in source control, each developer should be sure that they check in working code and that all tests pass.
For this purpose we decided to create an individual database for each developer, following the naming convention DbMain_XX (where XX are the developer's initials). Before checking in, every developer is supposed to publish all their changes to that database manually and run the unit tests. It is useful to set up a publish config for this purpose that is a copy of the main publish config, with the only difference being the database name.
That would mean a lot of different publish profiles in the solution, which is quite a mess.
If we don't add these profiles to source control, the .sqlproj file would still reference them, so the project would reference files that don't exist.
So the actual question: can I have a single publish profile for all developers in which the database name is set through a variable, for example DbName_$(dev_initials)? Or can each developer keep their own publish config only locally without breaking the project?
UPDATE:
In response to Peter Schott's comments:
I can create a local publish profile, but if I don't add it to source control there will still be an entry for it in the .sqlproj file while the file itself is unavailable.
Running the tests locally has at least two disadvantages. The first is that everybody would have to install SQL Server locally; we mainly work in virtual machines, where disk space is quite limited. The second is that developers will inevitably forget, or simply not run, the tests manually every time. Sometimes they will push changes to the repo without building them and/or running the tests. We would like to avoid such situations and catch a failed build as soon as possible.
Another approach that was mentioned is to have one common build database, and in my case we have one (DbMain). All developers can use it for their needs, but we will certainly hit the situation where two developers publish at the same time, which can cause a lot of confusion when figuring out what actually went wrong.
A common approach to this kind of thing - not only for SSDT publish profiles but for config files in general - is to commit a generic version of the file under a name like DbMain.publish.xml.template, and instruct each developer to rename their copy to DbMain.publish.xml (or whatever) and .gitignore that local copy. Developers can then make whatever changes they want while inheriting the common settings from the .template version of the file.
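As a rough sketch of that convention (the file names follow the example above, and the commands assume git is your source control):

    # commit the generic profile once, under the .template name
    git add DbMain.publish.xml.template

    # each developer makes a private copy and edits the database name in it
    cp DbMain.publish.xml.template DbMain.publish.xml

    # keep the local copy out of version control
    echo "DbMain.publish.xml" >> .gitignore
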
Publish profiles don't need to be added to the .sqlproj to be used at deploy time; that is merely a convenience in Visual Studio to make them easier to find and edit, so you don't need to worry about broken references.
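For instance, a profile kept outside the project can still be handed to SqlPackage at publish time; this is only a sketch with illustrative paths (on Windows the tool is SqlPackage.exe with the same switches):

    # publish the compiled dacpac using a profile that is not referenced by the .sqlproj
    sqlpackage /Action:Publish \
        /SourceFile:bin/Release/DbMain.dacpac \
        /Profile:DbMain.publish.xml
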
You are right to want to avoid multiple developers publishing to a common "build" database; that is a recipe for frustration.
Really, you want the "build" database to be published to as part of your CI process, meaning after the developers have pushed their changes.

Deploying relevant magento backend changes

I'm thinking about a good deployment strategy for Magento. I have already managed to deploy code with git from my local installation to my stage server. (The jump to live is not a problem then.)
Now I'm thinking about how to deploy backend changes like the following:
I'm adding a new attribute set and I want it to be available on my stage and later the live server. Since these settings are in the database, I could just do a mysqldump and restore this dump on my stage/live systems.
But I can't do this, since the database has more data like orders, articles (with current stock availability) and a lot more stuff which I don't want to deploy from my testing system.
How are others handling this deployment "problem"?
After some testing, I chose the extension Mageploy, which is easy to install via modman (I prefer modgit, which relies on the same data for installation) and already captures a lot of important backend settings.
If you need more, it's possible to extend it to cover additional backend settings yourself (and then contribute back to the git project; pull requests are considered quickly).

Integration of different works by different people in moodle

We are developing a Moodle site. We are a group of 5 people and each of us is working on a different module locally. Now we want to integrate everyone's work on one machine or server. Is there any way to version control it or integrate it, given that each of our databases is different because of different data? Please provide a solution as early as possible.
It is not completely clear as to whether you are separately working on the content of the site or the code for the new site, so I will attempt to answer both questions.
For content the easiest way to integrate it all together into one site is to use the Moodle backup and restore mechanism ( http://docs.moodle.org/26/en/Course_backup ) - backup each of the courses and then restore them onto the main site. If you have a lot of courses to transfer, then it may make more sense to write some code to automate certain aspects of this, but that can be quite a bit of work, so usually it is easier to just manually do the backup and restore.
For code the answer is Git. All the core Moodle code is version controlled via git. Make sure that each developer is working with their own clone of your main git repository (you can find the core Moodle repository at …). Once they have committed their changes, these can be pushed (to a central repository) or pulled to your production site. Read more at http://docs.moodle.org/dev/Git_for_developers
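A minimal sketch of that cycle, with a placeholder repository URL and module name:

    # each developer works from their own clone of the shared repository
    git clone https://example.com/your-org/moodle-site.git
    cd moodle-site

    # commit the module they are responsible for
    git add mod/mymodule
    git commit -m "Implement mymodule activity"

    # publish to the central repository so the changes can be pulled to production
    git push origin master
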
Note that if the code for each module has been written with the proper DB installation / upgrade code ( http://docs.moodle.org/dev/Upgrade_API ) then it should simply be possible to take the code from each of the developed modules, put them together into one codebase and then create a fully-working fresh install. Once you have that, you should be able to use backup and restore to transfer any required courses from the development servers to the live server.

Are git submodules a good solution for storing a large DB dump?

I.e., we have a 20MB bzip2 sql file of development data that we'd like to have versioned along with our development code.
However, we don't want this file pulled down from the repo by default with every fresh clone/fetch.
One solution seems to be storing this large file in a separate repo and then linking to it with a submodule. A developer would fetch the db file only when they need to retrieve and reset their development database. When there's a schema change, the database file would be updated, committed to the external repo, and the submodule updated.
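Roughly, I imagine it would look like this (the repository URLs are made up):

    # in the main code repo: track the dump repo as a submodule
    git submodule add https://example.com/team/dev-db-dump.git db/dump
    git commit -m "Reference development DB dump as a submodule"

    # a plain clone of the main repo leaves db/dump empty
    git clone https://example.com/team/app.git

    # a developer pulls the dump only when resetting their development database
    git submodule update --init db/dump
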
Is this a good development workflow? Or is there a better way of doing this?
EDIT: The uncompressed SQL dump is 360MB.
EDIT: GitHub says "no", don't do this:
Database dumps
Large SQL files do not play well with version control systems such as
Git. If you are looking to provide your developers with the most
recent production dataset, we recommend using Dropbox for sharing
files like these among your developers.
I ended up making a simple web server serve the schema dump directory from the repo where dumps are stored. The repo grew really quickly because the dumps are large, and it was slowing people down just to clone it when they had to bring up new nodes.
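For anyone wanting to do the same, even Python's built-in HTTP server is enough to serve the dump directory; this is a sketch rather than my exact setup, and the paths and hostname are made up:

    # serve the directory holding the schema dumps over plain HTTP
    cd /srv/schema-dumps
    python3 -m http.server 8000

    # developers fetch the latest dump on demand, for example:
    curl -O http://dumps.internal.example:8000/dev_schema.sql.bz2
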

How do you manage your run-once SQL install scripts in Subversion?

I'm working at a company that does several releases to production every year, and in the build-up to each release we gather a collection of one-time SQL install scripts, such as table creation and dataports.
The way things currently work is that after the release to production, we branch, tag, and then delete all one-time scripts from Subversion.
This seems to get the job done but to me it never seemed like the proper way to solve the problem.
Could you imagine deleting all your source code every release and then writing patches for production?
The downside I see is that if you want to reference an old script you have to check out a tag or branch from Subversion.
Our SVN repo currently looks something like this:
svnrepo/mywebsite/src
svnrepo/mywebsite/database/storedprocs
svnrepo/mywebsite/database/installscripts
I was thinking that a more accurate way to model what we want to do in SVN is the following:
Use an svn:externals property to point to the latest release directory, and after every release just repoint it to the new one.
svnrepo/mywebsite/trunk/src/
svnrepo/mywebsite/trunk/src/database/installscripts/
-> svnrepo/mywebsite/trunk/database/Release_3
svnrepo/mywebsite/trunk/database/Release_1
svnrepo/mywebsite/trunk/database/Release_2
svnrepo/mywebsite/trunk/database/Release_3
Using this model we no longer svn delete any SQL scripts, and a database developer can check out svnrepo/mywebsite/trunk/database/ and easily view all the database development that has occurred.
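Concretely, I imagine setting the external as a property on the installscripts parent directory, something like this (the server URL is illustrative):

    # inside a working copy of svnrepo/mywebsite/trunk/src/database
    svn propset svn:externals \
        "https://server/svnrepo/mywebsite/trunk/database/Release_3 installscripts" .
    svn commit -m "Point installscripts external at Release_3"

    # after the next release, repoint the external at the new release folder and commit again
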
Any comments on my ideas, the current structure, or the best way to manage this situation?
Thanks
Synchronising database changes and code changes in Subversion is hard.
If you have the option of building the database from scratch, you can put the whole DDL into the repository along with the code; then you don't need to worry about which changes go with which release.
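For example, with the full DDL checked in next to the code, a small build step can recreate the database on demand; the script layout and the sqlcmd call here are purely illustrative:

    # run every DDL script, in order, against a freshly created database
    for f in database/ddl/*.sql; do
        sqlcmd -S localhost -d MyWebsiteDb -i "$f" || exit 1
    done
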
Looking at your situation I don’t think you need to use externals (they can cause headaches). You also don’t need to delete everything. It is not too difficult to check out a branch (or you could just use a repository browser).
You could even put the old db releases into a separate tag when you release so they are all in one place, which the database people can have checked out. If you are doing releases once a year this won’t be hard.
This question may also help
