I've read in many places that renaming a branch is rather problematic in TFS 2010: you may lose the history of the branch you just renamed (as seen in this article or in this SO question).
I cannot find any mention of those problems in TFS 2012. Are there any consequences I should be aware of before renaming a branch in TFS 2012?
The biggest problem with renaming a branch is that you will effectively be performing a baseless merge the next time you merge to or from the renamed branch. This can cause a lot of pain.
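For reference, once the branch has been renamed you typically end up having to run that next merge with the /baseless switch on tf.exe; the server paths below are made up purely for illustration:

    tf merge /recursive /baseless $/MyProject/RenamedBranch $/MyProject/Main
    tf checkin /comment:"Baseless merge after branch rename"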
I'm currently trying to untangle such a mess and it's not pleasant (the branch was renamed 4 months ago, and the first merge from the branch was partial). It's a nightmare I wouldn't wish on my worst enemy (who, coincidentally, are the devs who renamed the branch and did the partial merge).
See this answer for more info
DON'T DO IT!!! You might be able to rename it on the server, but in my experience TFS wants to check every file out... basically treating it like a copy.
You can do it, but it depends on what situation you're in. In my situation, I have the following structure:
Development
    ProjectX
    ProjectY
Main
Release
ProjectX is getting released sooner than ProjectY, and it was merged to Development-->Main a week ago. Now the name ProjectX isn't relevant anymore, and there's also a new project starting with the name ProjectZ, so I'm going to rename ProjectY to ProjectZ and rename ProjectX to ProjectY.
X, Y and Z are all to be merged fully once they move to the standard release cycle, so I don't have to worry about merging piece by piece.
We want to clean up our database schema and drop/delete objects which are no longer being used.
We suspect that sometime in the future we'll want to resurrect the removed functionality.
We've discussed the following options for dealing with dropped objects in version control:
Deleting the .sql files from source control once they are gone from the database and relying on the version history to store the definitions. Our concern with this approach is that sometime over the years source control will be moved and we will lose the history. It also seems difficult to know what to look for to recover if we can't see all the dropped objects.
Leaving the .sql files in source control but updating the definitions to "drop proc {someproc}". With this approach we are concerned about leaving objects in version control which no longer exist, and also the risk of losing the history if the VCS was moved.
Creating a new repo for dropped objects and migrating .sql files to this repo once they have been dropped from SQL Server.
We're working in a Windows environment and are fairly new to working with a VCS for databases. Currently Git + SSDT.
Currently option 3 is our preferred approach.
I see this a lot with database code: what happens is that over time people end up with stuff in the database that is either not used or just does not work (think of a proc that references a table, and the table is modified but not the proc).
The thing to do is to get everything in source control (which it looks like you have) and then create a tag or branch of all the code before and after deleting it so you can get it back.
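With Git (which you mentioned you're using) that could be as simple as something like the following; the tag names and file path are only illustrative:

    git tag before-dropping-unused-objects
    git rm dbo/StoredProcedures/usp_ObsoleteProc.sql   # illustrative path to a dropped object's script
    git commit -m "Drop unused database objects"
    git tag after-dropping-unused-objects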
Two things normally transpire: either the code was genuinely never used, or it was used at year end, and by the time you find out the world is about to fall on your head, so you'd better have a quick way to get it back.
Of course if you had a full suite of tests then even the year end process would be safe :)
I personally wouldn't use option 3; I would just keep everything in the main branch so the history stays with it.
There are a lot of good tools for versioning database changes (you have a good chance of getting this question closed as "too broad"), but I'll try to suggest the following:
Read about, understand, and try adding Liquibase to your development toolbox.
Adapt your workflow to use this additional layer - technically it will be one more file (a changelog, in Liquibase terms) in the changesets where you change DDL and/or data.
These changelogs provide a good and smooth way of moving back and forth in a linear history of database changes; they are not so good (or I don't know The Right Way) for jumping directly between nodes of a diverged history, but that doesn't seem to be your case.
From your list of options this is closest to option 1 (but it stores changes to the database in version control, not states).
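To give a feel for what a changelog looks like, it can even be a plain SQL file in Liquibase's "formatted SQL" flavour; the author, ids and table below are purely illustrative, not your schema:

    --liquibase formatted sql

    --changeset alice:1
    CREATE TABLE customer (id INT NOT NULL PRIMARY KEY, name NVARCHAR(100));
    --rollback DROP TABLE customer;

    --changeset alice:2
    ALTER TABLE customer ADD email NVARCHAR(255);
    --rollback ALTER TABLE customer DROP COLUMN email;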
Just to note another option, in SSDT you can mark the file property as Build Action = None. The file won't be included in the dacpac when this build option is selected. But I tend to agree with the idea that you should rely on your VCS to handle history.
We have a constant problem with the project XML file (*.sqlproj). If files are added/removed/change location, then it automatically adds/removes records in some unexpected places. After that we have big trouble merging it when somebody else changes that file as well.
We came to the conclusion that we might sort it before check-in. We would sort it alphabetically, and in that case the merge tool would understand it much better.
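Something like this rough PowerShell sketch is what we have in mind (untested, and the file name is just an example):

    # Sort the items in each ItemGroup of the .sqlproj alphabetically by their
    # Include attribute, so that merges produce fewer spurious conflicts.
    $path = (Resolve-Path "MyDatabase.sqlproj").Path   # example file name
    [xml]$proj = Get-Content -Raw $path
    foreach ($group in $proj.Project.ItemGroup) {
        $items = @($group.ChildNodes) |
                 Where-Object { $_.NodeType -eq "Element" } |
                 Sort-Object { $_.GetAttribute("Include") }
        foreach ($item in $items) { [void]$group.AppendChild($item) }   # re-append in sorted order
    }
    $proj.Save($path)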
So, my questions would be:
Is it possible to re-arrange the sqlproj file somehow before EVERY check-in? Maybe there is some kind of option/tool that does that already?
Are there any other ways to make developers' lives easier?
UPDATE:
Once again I got the same problem. The sqlproj file was modified 3 times and I want to merge only the last change to production; the other 2 are not tested yet. In the merge tool I have the option to add all 3 new objects or leave the file without changes. I am not able to select only the last change ...
EXAMPLE:
developerA created tableA and checked in;
developerB got the latest version of dev branch, created tableB and checked in;
developerC got the latest version of the dev branch, created tableC and checked in. DeveloperC tested the code and is ready to go to production. He tries to merge his code to QA and gets a conflict where he has the option only to go with ALL changes.
I understand the scenario you are running into very well. This typically happens when you have multiple work streams happening in the context of a single repository and you don't have a common promotion schedule (as in all work will go to QA at the same time and PROD at the same time).
There are a few ways I can think of to get around this problem, and there are pros and cons to each option.
Lock each environment until everything can promote together. Not realistic in most cases.
When you are ready to promote, create a promotion branch from source environment and take things out of the promotion branch that aren't ready to promote to destination environment. This allows devs to keep working and be able to promote without freezing.
Hybrid approach... Don't source control anything in Dev until it's ready to promote to test. Then either do option #1 or 2 from there onward.
Create a more flexible ecosystem that can spin up an environment for each feature branch in order to demo/test with others (or at least allocate/rotate enough environments between the developers to accomplish the same objective). Once it's accepted, promote. This is what we are working towards currently, but building out the infrastructure and process when you have a ton of interconnected databases and apps that share them is a bit challenging, to say the least (especially in the Microsoft world).
Anyway, hope this helps...
1 - What source control are you using? No source control system that I am aware of understands the contents of sqlproj files, but this isn't normally a problem.
2.a - This shouldn't be a problem you get constantly; are you checking in/out regularly? I would only expect to see issues if different developers are making large-scale changes to the projects and not checking out / checking in before and after.
2.b - It is also possible you are not merging correctly; if you take both sets of changes then it is normally fine.
I have a requirement posted by the development team to reverse all changes in a given UCM activity. The constraint is that we do not have delete rights. Meaning, I know I can do an lsactivity to list all elements in an activity with their respective versions, and in an ideal world I would then be able to delete those versions.
But the SCM policy does not permit us to delete/rmver anything, so I am left with merging back one version. Say I have version 5 of a.java checked into an activity. One way I think to achieve this is to find version 4 (using -predecessor) and blindly copy that version 4 in as version 6. Assume that each file has only 1 version in the activity this time; if a file had more than one version checked in through an activity, this would be more complex, so let's ignore that for now.
Any other ideas, or thoughts on whether my approach would or would not work?
A more robust way would be to:
list all files in an activity
for each file, find the oldest version
make a negative merge, or subtractive merge.
A subtractive merge can be performed to exclude or bypass bad versions on a branch without actually removing the bad versions.
A cleartool merge using the -delete option will allow a user to merge from the last known good version to a new version on the same branch, excluding the work done in the versions identified as bad.
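For a single element it looks roughly like this (the branch name and version number are invented for the example):

    cleartool checkout -nc a.java
    cleartool merge -to a.java -delete -version \main\mybranch\5
    cleartool checkin -nc a.java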
That would be compliant with your SCM policies in place.
That is, in essence, what the cset.pl script mentioned by Tamir does, as I explain in "Clearcase: how to rollback all changes on specific branch?"
ccperl cset.pl -undo myActivity
WARNING: LONG QUESTION.
[QUESTION]
If the strategy is to have a branch per database, as described in the problem below, where scripts are version controlled:
How do you manage the data migration issues when trying to consolidate to fewer branches?
Is it just a cost you incur as part of data migration?
Essentially transform scripts will have to be created at the time of migration.
Is there a better way?
Can we have both issues resolved at the same time?
What is the best practice?
[BACKGROUND]
At my workplace we have a product which has 3 branches, with Mainline having the "LATEST AND GREATEST" changes, which are not necessarily ready for release.
Version B (names have been changed to protect the guilty)
Version A (names have been changed to protect the guilty)
Mainline
Because of these branches there are effectively 3 versions of the database.
Code version control is fairly easy however database version control seems difficult.
Having read Do you use source control for your database items?
it seems the best way is to export all the create scripts for each object/table.
NOTE: How you manage it, in one big script or multiple scripts or a hybrid, is your preference according to the article.
I agree with this and have inquired as to why it's not done.
Currently the DBAs refuse to branch the scripts.
Aside from laziness as an excuse, the stated reason is to save time with data migration.
Effectively the database changes are forcibly maintained across all versions.
All the scripts are version controlled and maintained only in mainline.
Version A and Version B each have their own special file that states which change scripts to run on their respective branch. The problem arises when there is a change script that is, for instance, applied to Version A but Version B only requires part of the changes. It is up to the developer to inform the DBAs to update the file which indicates which patches to apply for each branch. For change scripts which do too much, manual intervention is needed to apply only part of the change script.
To update a database on Version A, all patches are extracted according to Version A's which-patches-to-apply file.
[SCENARIO]
The 3 versions above exist.
Database changes occur to Version A.
Branch consolidation happens, where the code is merged from Version B to A so that Version B can be removed.
The same needs to happen with the database.
Hope this makes sense.
Take a look at Chapter 8 in Eric Sink's Source Control HOW TO. It's a great resource for understanding the ins and outs of source control.
Can anyone provide some real examples as to how best to keep script files for views, stored procedures and functions in an SVN (or other) repository?
Obviously one solution is to have the script files for all the different components in a directory or more somewhere and simply use TortoiseSVN or the like to keep them in SVN. Then whenever a change is to be made, I load the script up in Management Studio, etc. I don't really want this.
What I'd really prefer is some kind of batch script that I can run periodically (nightly?) that would export all the stored procedures / views etc that had changed in a given timeframe and then commit them to SVN.
Ideas?
Sounds to me like you're not wanting to use revision control properly.
"Obviously one solution is to have the script files for all the different components in a directory or more somewhere and simply using TortoiseSVN or the like to keep them in SVN"
This is what should be done. You would have your local copy you are working on (Developing new, Tweaking old, etc) and as single components/procedures/etc get finished, you would commit them individually until you have to start the process over.
Committing half-done code just because it's been 'X' time since it was last committed is sloppy and guaranteed to cause anyone else using the repository grief.
I find it best to treat Stored Procedures just like any other compilable code: Code lives in the repository, you check it out to make changes and load it in your development tool to compile or deploy the code.
You can create a batch file and schedule it (see the sketch after this list):
delete the contents of your scripts directory
use something like ExportSQLScript to export all objects to a script or scripts
svn commit
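A rough sketch of such a batch file (the paths are made up, and the ExportSQLScript arguments are placeholders - check that tool's documentation for its real command line):

    rem 1. clear out the old scripts
    del /q C:\db-scripts\*.sql
    rem 2. script out all objects (placeholder invocation, not the tool's real syntax)
    ExportSQLScript.exe <server> <database> C:\db-scripts
    rem 3. pick up new files and commit (dropped objects / svn delete not handled in this sketch)
    svn add --force C:\db-scripts
    svn commit C:\db-scripts -m "Nightly export of database objects"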
Please note: although you'll have the objects under source control, you won't have the data or its progression (is that a renamed field, or 1 new field and 1 deleted?).
This approach is fine for maintaining change history. But, of course, you should never be automatically committing to the "production build" (unless you like broken builds).
Although you didn't ask for it: this approach also won't produce a set of scripts that will upgrade a current DB. You'll only have initial creation scripts. Recording data progression and creating upgrade scripts is beyond basic source control systems.
I'd recommend Redgate SQL Compare for this - it allows you to compare database versions and generate change scripts - it's also fairly easily scriptable.
Based on your expanded question, you really want to use DDL triggers. Check out this article that details how to create a changelog system for your database.
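I can't reproduce the article here, but the general shape of such a changelog via a DDL trigger is something like this (the table and trigger names are my own and purely illustrative):

    CREATE TABLE dbo.DdlChangeLog (
        LogId      int IDENTITY(1,1) PRIMARY KEY,
        EventDate  datetime      NOT NULL DEFAULT GETDATE(),
        LoginName  sysname       NOT NULL,
        EventType  nvarchar(100) NOT NULL,
        ObjectName nvarchar(256) NULL,
        SqlCommand nvarchar(max) NULL
    );
    GO
    -- Logs every CREATE/ALTER/DROP of procedures, views and functions in this database.
    CREATE TRIGGER trg_LogDdlChanges ON DATABASE
    FOR DDL_PROCEDURE_EVENTS, DDL_VIEW_EVENTS, DDL_FUNCTION_EVENTS
    AS
    BEGIN
        DECLARE @e xml;
        SET @e = EVENTDATA();
        INSERT INTO dbo.DdlChangeLog (LoginName, EventType, ObjectName, SqlCommand)
        VALUES (ORIGINAL_LOGIN(),
                @e.value('(/EVENT_INSTANCE/EventType)[1]',  'nvarchar(100)'),
                @e.value('(/EVENT_INSTANCE/ObjectName)[1]', 'nvarchar(256)'),
                @e.value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'nvarchar(max)'));
    END;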
Not sure of your price range; however, DB Ghost could be an option for you.
I don't work for this company (or own the product) but in my researching of the same issue, this product looked quite promising.
I should've been a little more descriptive. The database in question is for an internal ERP system, and thus we don't have many versions of our database, just Production/Testing/Development. When we've done a change request, some new fancy feature or something, we simply execute a script or series of scripts to update the procedures in question on the Testing database; if that is all good, then we do the same to Production.
So I'm not really after a full schema script per se, just something that can keep track of the various edits to the stored procedures over time. For example, PROCESS_INVOICE does stuff. It gets updated in some minor way in March. Some time later, say in May, it is discovered that in a rare case customers get double-invoiced (or some other crazy corner case). I'd like to be able to see what has happened over time to this procedure. Currently, the way the development environment is set up here, I don't have that, which I'm trying to change.
I can recommend DBPro, which is part of Visual Studio Team Edition. I have been using it for a few months for storing all parts of the database in Team Foundation Server, as well as for deployment, database compares, etc.
Of course, as someone else mentioned, it does depend on your environment and price range.
I wrote a utility for dumping all of the relevant parts of my db into a directory structure that I use SVN on. I never got around to trying to incorporate it into the Manager but, if you're interested, it's here: http://www.reluctantdba.com/dbas-and-programmers/sqltools/svnforsql2005.aspx
It's free and, since I regularly run it, you know any bugs get fixed quickly.
You can always try integrating SourceSafe with SQL Server. Here's a quick start: link. To work with it you've got to have Management Studio Developer Edition.