I am responsible for ClearCase on my project, but I do not have much experience with it.
My issue is that our current project structure in ClearCase is a single project with PROD, PV, ST and DV streams, as seen here:
Link to the screenshot
As you can see, we have an individual stream for each developer under the DV stream, and we deliver code upstream one stream at a time. Due to changes in management, we now have to support parallel development: if there is a bug in PROD and we want to fix it and deliver it back to PROD without delivering the current activities/baselines being worked on by other developers, how can we change our ClearCase project to allow that?
We want to have something like:
PROD (JAN release)
-PV (JAN release)
-ST (JAN, FEB release)
-DV (JAN, FEB, MAR release)
so that we can manage the JAN, FEB and MAR releases separately. If we have to fix something in the JAN release and do not want to include the FEB and MAR changes, how can we do that?
It would be great if you could give us some insight as soon as possible.
"individual stream for each developer under DV stream"
What???? This is SPARTA! (err... no: madness: this is madness)
A stream represents a development effort, not a sandbox for a "resource" (i.e. "a developer"). Resources come and go, development tasks stay.
You should have one stream per development line, on which many developers create their own views.
That way, if you need parallel development, you only have to create one "brother" stream beside the current one and rebase that bug-fix stream with a baseline from PROD.
So:
PROD
PV
ST
DV
PV-JAN
By creating PV-JAN, you create a stream dedicated to small evolutions to the baseline created for JAN.
And you do not have to create all those sub-streams per developer, since it would represent far too many deliver/rebase steps.
The 2 or 3 developers who need to fix anything on PV-JAN create their own views on the same stream. They will all participate in the same development effort (fixing bugs for the JAN release of PV).
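As a rough sketch of what that looks like from the command line -- the project, PVOB, baseline and view names below are all hypothetical, and exact selectors and options can vary between ClearCase versions -- creating the brother stream and working on it could be something like:
# Create the bug-fix stream under the project, founded on the JAN baseline
cleartool mkstream -in MyProject@/vobs/mypvob -baseline JAN_RELEASE_BL PV-JAN@/vobs/mypvob
# Each of the 2 or 3 developers attaches a view of their own to that single stream
cleartool mkview -tag dev1_PV-JAN -stream PV-JAN@/vobs/mypvob -stgloc -auto
# If a newer PROD baseline appears, pick it up (run from a view on PV-JAN)
cleartool rebase -baseline JAN_RELEASE_BL.1 -complete
# When the fix is ready, deliver it back up to the parent stream
cleartool deliver -stream PV-JAN@/vobs/mypvob -target PV@/vobs/mypvob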
I've read in many places that renaming a branch is rather problematic in TFS 2010: you may lose the history of the branch you just renamed (as seen in this article or in this SO question).
I cannot find any mention of those problems in TFS 2012. Are there any consequences I should be aware of before renaming a branch in TFS 2012 ?
The biggest problem with renaming a branch is that you will effectively be performing a baseless merge the next time you merge to or from the renamed branch. This can cause a lot of pain.
I'm currently trying to untangle such a mess and it's not pleasant (the branch was renamed 4 months ago, and the first merge from the branch was partial). It's a nightmare I wouldn't wish on my worst enemy (who, coincidentally, are the devs who renamed the branch and did the partial merge).
See this answer for more info
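For reference, a hedged sketch of what this looks like from the tf.exe command line (the server paths below are made up); note the /baseless switch that becomes necessary once TFS no longer sees a branch relationship:
# Rename the branch on the server
tf rename "$/MyProject/Dev/OldBranch" "$/MyProject/Dev/NewBranch"
# A later merge to or from the renamed branch may have to be done baseless,
# which ignores merge history and tends to produce many manual conflicts
tf merge /recursive /baseless "$/MyProject/Dev/NewBranch" "$/MyProject/Main"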
DON'T DO IT!!! You might be able to rename it on the server, but in my experience TFS wants to check every file out... basically treating it like a copy.
You can do it, but it depends on what situation you're in. For my situation, I have the following structure:
Development
ProjectX
ProjectY
Main
Release
ProjectX is getting released sooner than ProjectY, and it was merged to Development-->Main a week ago. Now the name ProjectX isn't relevant anymore, and there's also a new project starting with the name ProjectZ, so I'm going to rename ProjectY to ProjectZ and rename ProjectX to ProjectY.
X, Y and Z are all to be merged fully once they move to the standard release cycle, so I don't have to worry about merging piece by piece.
I have some closed-source code (written by me) that I'd like to keep for myself.
However I'd like to use some of that code in an open source project - which would also be written exclusively by me, but posted somewhere for anyone to use.
How can I accomplish something like this:
1. Release an open source version of the code, which is free for anyone to change, and keep or give back (I believe this is the zlib license).
2. At the same time, have a version of the code for myself.
3. Be able to incorporate any code changes from #1 (if someone decides to fix up any code I release).
Is this as simple as having two code trees, with a different license attached to each: one under the zlib license, and one under my own license?
Note: I'm a bit biased toward the zlib license, as it is short, I believe I understand it, and I agree it is appropriate (I don't mind if someone uses the code for commercial purposes).
The first part indeed requires two code trees. If you're using a distributed SCM, like git, mercurial or bazaar, then it wouldn't be too complicated:
One repo is public, open source, let's call it open-foobar
One repo is private, closed source, let's call it enterprise-foobar
In my opinion, it would be better to have the open repository as the main repository (upstream), which the private one forks and extends, but it depends on how much code you want to keep private. If most of your commits will go into the private project, then it would make sense to make that one your main repository; however, this will be harder to manage properly.
Your local clone should add both repositories as remotes, and you should have separate local branches tracking branches on one of the two repositories, for example master-open tracks open-foobar/master, while master-enterprise tracks enterprise-foobar/master
You commit new open stuff in the master-open branch, which you push to the open repository
You merge master-open into master-enterprise, meaning that you include all the commits from the open repository in the private one
You commit new private stuff in the master-enterprise branch, which you push to the private repository, making sure that you don't accidentally merge these commits into the master-open branch.
When you want to include private commits in the open version, you can use git cherry-pick <commit-id> to copy just those commits without exposing the fact that they come from the private fork.
This workflow is heavily centered on git, but it can be easily adapted to any other SCM system.
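A minimal sketch of that workflow in plain git commands; the remote URLs are hypothetical, while the repository and branch names come from the description above:
# One local clone with both repositories as remotes
git remote add open-foobar git@example.com:me/open-foobar.git
git remote add enterprise-foobar git@example.com:me/enterprise-foobar.git
git fetch --all
# Local branches, each tracking one of the two repositories
git checkout -b master-open open-foobar/master
git checkout -b master-enterprise enterprise-foobar/master
# Open work goes on master-open and is pushed to the open repository
git checkout master-open
git commit -am "Open change"
git push open-foobar master-open:master
# Fold the open work into the private branch and push it to the private repository
git checkout master-enterprise
git merge master-open
git push enterprise-foobar master-enterprise:master
# Selectively publish a private commit without exposing the private history
git checkout master-open
git cherry-pick <commit-id>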
For the second part: by default, patches from people who aren't working for you under a contract that explicitly assigns you the copyright on the code they create remain the copyright of their original author. This means that unless they grant you a license to the code, you can't include it in your private fork. Normally that doesn't grant you a license to include it in the open source project either, unless it is somehow covered by the license of the project or by the patch submission process (by clicking the submit button you grant a license...). The Apache license is good here, since it explicitly states that patches submitted to the project are automatically licensed so that they can be included in the code. I haven't read the zlib license, so I can't say whether it has a similar clause; if it doesn't, be sure to include some text requiring contributors to agree that you may include any patch submission in the open source project under the project's license.
If you also want to include submitted patches into your private project, then you must ask for a copyright license on the code for inclusion in any private derivative of their code. See Wikipedia's article on CLAs for more details and some examples. You could take a look at Project Harmony for some "standard" CLAs.
The Apache group and others require code submitters to "sign" away ownership of any changes. You would need to do the same: before you accept fixes or enhancements, they have to sign the code over.
I wish to make a regular backup of my notes stored on my iPhone, iPad and Mac OS in the standard Notes.app. Unfortunately, since Apple moved these from their standard IMAP format to a database format (and added a separate app), this is close to impossible.
I currently have over 200 notes and growing. I suppose they are stored in a standard database format and get synced to iCloud and pushed to all devices.
Notes seems to store its data in this path:
"Library/Containers/com.apple.Notes/Data/Library/Notes/"
If any of you can reliably read, and perhaps even back up and restore, this database, then please comment.
There is an Apple KB article, HT4910, that deals with this issue, but it proves to be of little help. In fact, their method complicates things and is very inelegant for multiple backups.
Time Machine, Apple's own built-in backup solution, is also of little help, as it seems to skip notes during backup and allows no restore for them.
I'd be grateful if someone could look into this and come up with a solution, which would certainly be appreciated by many in the growing community of iCloud users.
OK, this is a somewhat incomplete answer, but I wanted to post it so people may be able to contribute.
Using this lsof command:
lsof -c Notes | grep /Users/
I was able to figure out that most of the Notes.app data was being stored here:
/Users/USERNAME/Library/Containers/com.apple.Notes/Data/Library/Notes
In that folder there are three files (for me at least):
NotesV1.storedata
NotesV1.storedata-shm
NotesV1.storedata-wal
Which strangely enough pointed me in this direction:
https://superuser.com/questions/464679/where-does-os-x-mountain-lion-store-notes-data
I also found an SQLite cache database here:
/Users/USERNAME/Library/Containers/com.apple.Notes/Data/Library/Caches/com.apple.Notes/Cache.db
Though investigating it with sqlite3 only turned up a few uninteresting tables:
sqlite> .tables
cfurl_cache_blob_data cfurl_cache_response
cfurl_cache_receiver_data cfurl_cache_schema_version
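Based on the storedata location above, a crude backup can be as simple as copying those store files somewhere safe (quit Notes.app first so the -wal/-shm journal files are quiescent; the destination path below is arbitrary, and the file names may differ between OS X versions):
# Copy the Notes store files to a dated backup folder
BACKUP_DIR="$HOME/Backups/Notes-$(date +%Y%m%d)"
mkdir -p "$BACKUP_DIR"
cp -p "$HOME/Library/Containers/com.apple.Notes/Data/Library/Notes/"NotesV1.storedata* "$BACKUP_DIR/"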
If you need to get your notes back, first disconnect from the internet... Then copy your Notes folder to a safe place (e.g. the desktop)... Then delete the folder in your Library and copy the safe copy back to the Notes location in Library...
You will see one file with an old date; that is the main file you need to get the notes back. You can delete the other two newer-dated files, as they are copies from iCloud.
Now you can open Notes.app and you will see that all your old notes are back.
I was poking around trying to accomplish the same thing and found where the notes are stored in 10.8.5 Mountain Lion. It is very straightforward. The location is as follows:
/Users/(your user)/Library/Mail/V2/Mailboxes/Notes.mbox/(long number with hyphens)/Data/Messages/
The individual notes are stored in that location with a number.emlx name format.
If you copy the Notes.mbox folder, that should get them all.
On the project I am working on, we are trying to come up with a solution for having the database and the code be agile and able to be built and deployed together.
Since the application is a combination of code plus the database schema and database code tables, you cannot truly have a full build of the application unless you have a database that is versioned along with the code.
We have not yet been able to come up with a good agile method of doing the database development along with the code in an agile/scrum environment.
Here are some of my requirements:
I want to be able to have a svn revision # that corresponds to a complete build of the system.
I do not want to check in binary files into source control for the database.
Developers need to be able to commit code to the continuous integration server and build the entire system and database together.
Must be able to automate deployment to different environments without doing a rebuild other than the original build on the build server.
(Update)
I'll add some more info here to explain a bit further.
No O/R-M tool, since it's a legacy project with a huge amount of code.
I have read the agile database design information, and that process in isolation seems to work, but I am talking about combining it with active code development.
Here are two scenarios:
A developer checks in a code change that requires a database change. The developer should be able to check in the database change at the same time, so that the automated build doesn't fail.
A developer checks in a DB change that should break code. The automated build needs to run and fail.
The biggest problem is how these things sync up. There is no such thing as "checking in a database change". Right now, applying DB changes is a manual process someone has to do while code changes are constantly being made. They need to be made and checked in together, and the build system needs to be able to build the entire system.
(Update 2)
One more addition here:
You can't bring down production; you must patch it. It's not acceptable to rebuild the entire production database.
You need a build process that constructs the database schema and adds any necessary bootstrapping data. If you're using an O/R tool that supports schema generation, most of that work is done for you. Whatever is not tool-generated, keep in scripts.
For continuous integration, ideally a "build" should include a complete rebuild of the database and a reload of static testing data.
I just saw that you have no ORM tool... here's what we had at a company I used to work for:
db/
db/Makefile (run `make` to rebuild db from scratch, `make clean` to close db)
db/01_type.sql
db/02_table.sql
db/03_function.sql
db/04_view.sql
db/05_index.sql
db/06_data.sql
Arrange however necessary... each of those *.sql scripts would be run in order to generate the structure. Developers each had local copies of the DB, and any DB change was just another code change, nothing special.
If you're working on a project that already has a build process (Java, C, C++), this is second nature. If you're using scripts in such a way that there is no build process at all, this'll be a bit of extra work.
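If a Makefile feels like overkill, the same idea is just a loop that applies the numbered scripts in order. A hedged sketch, using SQLite purely for illustration (the setup above may well have targeted a different database engine):
#!/bin/sh
# Rebuild the local development database from scratch
set -e
DB=dev.db
rm -f "$DB"
for script in db/0*.sql; do
    echo "applying $script"
    sqlite3 "$DB" < "$script"
done
Run against a throwaway database by each developer and by the CI build, a schema change then really is just another file in the commit.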
"There is no such thing as "checking in a database change"."
Actually, I think you can check in database change. The trick is to stop using simple -- unversioned -- schema and table names.
If you have a version number attached to a schema as a whole (or a table), then you can easily have a version check-in.
Note that database versions don't have a fancy major-minor-release scheme. The "major" revision in application software usually reflects a basic level of compatibility, and that basic level of compatibility should be defined as "uses the same data model".
So app versions 2.23 and 2.24 both use version 2 of the database schema.
The version check-in has two parts.
The new table. For example, MyTable_8 is version 8 of a given table.
The migration script. For example MyTable_8 includes a MyTable_7 to MyTable_8 script which moves the data, providing defaults or whatever is required.
There are several ways this is used.
Compatible upgrades. When merely altering a table to add a column that permits nulls, the version number stays the same.
Incompatible upgrades. When adding non-null columns (that need initial values) or changing the fundamental shape of tables or data types of columns, you're making a big change and you have a migration script.
Note that the old data stays in place until explicitly dropped at the end of the change procedure. You have to run tests to assure that everything worked.
You might have a two-part drop -- first rename, then (a week later) finally drop.
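To make that concrete, here is a small hedged sketch of one such version bump (table and column names are invented, and SQLite syntax is used only for illustration): MyTable_8 is created with its new non-null column, the migration script moves the data across with a default, and MyTable_7 stays around until it is explicitly dropped later.
sqlite3 app.db <<'SQL'
-- Version 8 of the table adds a non-null column, so this is an incompatible upgrade
CREATE TABLE MyTable_8 (
    id     INTEGER PRIMARY KEY,
    name   TEXT NOT NULL,
    status TEXT NOT NULL DEFAULT 'active'   -- new in version 8
);
-- MyTable_7 to MyTable_8 migration: move the data, providing defaults
INSERT INTO MyTable_8 (id, name, status)
SELECT id, name, 'active' FROM MyTable_7;
-- MyTable_7 is only renamed now and dropped a week later, after tests pass
ALTER TABLE MyTable_7 RENAME TO MyTable_7_retired;
SQL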
Make sure that your O/R-mapping tool is able to build the necessary tables out of its default configuration and also to add missing columns. This should cover 90% of your cases.
The other 10% are:
coping with missing values for columns that were added after the data was inserted
writing data-migration scripts for the rare cases where you need to make more fundamental changes between versions
See the DBDeploy open source project. http://dbdeploy.com/
It allows you to check in database change scripts. It will then produce a consolidated change script including all changes that have not been applied.
The site describes the process pretty well.
This project is based on the techniques in the Martin Fowler article that was mentioned before. I was on the project that Martin based the article on. DbDeploy is a pretty good implementation of the process we used.
The migrations facility of Ruby on Rails was developed to handle exactly this need. If you're not using Rails for your application, you might see if this same concept has been ported to the framework of your choice, or read up on it and determine whether you could write some quick scripts that implement the same sort of functionality.
I have been struggling with versioning software for a while now.
I'm not talking about a naming convention, I'm talking about how to actually apply a version in a build system all the way through to a release.
I generally use major.minor.maintenance-[release type]
i.e. 1.0.2-rc1
The problem is managing the version number. I've tried many ways (sticking it in a build file, a properties file, a database, etc.), but I haven't found anything that really works well.
The closest thing I came up with is using Jira which I documented here:
http://blog.sysbliss.com/uncategorized/release-management-with-atlassian-bamboo-and-jira.html
I'm wondering if anyone has any good ideas about this.
Also, I'm wondering how people handle releasing a version... i.e., if I release/deploy version 1.0.0-rc1, do bugs found in this release then get logged against 1.0.0 (the next/production release)?
Microsoft uses <major>.<minor>.<patch>-<build number> (or a variation).
I like using <major>.<minor>.<buildnumber>
Where I'm working we use the Maven system: artifact-major.minor.revision[-SNAPSHOT], which allows us to develop "in progress" versions that change at a moment's notice (SNAPSHOT) and those which have been formally released. Some examples are:
email-services-1.0.0-SNAPSHOT.jar
email-web-2.3.11.war
crm-2.5.0.ear
If it has SNAPSHOT in it then it hasn't passed the full suite of tests or is just a developer experiment. If it doesn't have SNAPSHOT then it is a release candidate. We maintain a repository of release candidates and the most recent is sent for deployment once the testers are happy with it.
All of this can be managed with a few simple entries in a build file under Maven. See the Maven2 tutorial.
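For example, moving an artifact between SNAPSHOT and release versions is usually a one-liner, assuming the org.codehaus.mojo versions-maven-plugin is available (the version numbers below are made up):
# Work-in-progress version, rebuilt and overwritten at will
mvn versions:set -DnewVersion=2.3.12-SNAPSHOT
# Drop the SNAPSHOT suffix once it is ready to become a release candidate
mvn versions:set -DnewVersion=2.3.12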
This is probably a dead post now, but I'll add my two cents anyway. I'm of the opinion that build numbers should mean something to everyone who sees them. So I personally think that this is a good way to name versions:
major.minor.patch.revision - e.g. 1.1.4.2342
Major/minor numbers are pretty self-explanatory. But from the perspective of the 3rd number, it still needs to mean something to the customer. I've released this new version to you, Mr. Customer, but it wasn't worth a new minor number since we just fixed some bugs. So we've incremented the patch number.
The 4th number usually means absolutely NOTHING to the customer, so you might as well make it useful to you and anyone else in your company who sees it. So for us, that number is the SVN revision number. It tells us exactly which revision was responsible for that version, so that we can pull it out at any time to recreate it. Branching code obviously achieves this too, but not with 100% certainty.
Also, another advantage with an all-numeric version number is that it easily integrates into nearly every continuous build system.
Anyways, that's my two cents.
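As a hedged illustration of that last piece, the build can simply ask Subversion for the revision and append it to the hand-maintained part of the version (the numbers below are made up; --show-item needs a reasonably recent svn client):
# major.minor.patch is maintained by hand; the revision comes from the working copy
BASE_VERSION=1.1.4
REV=$(svn info --show-item revision)
echo "${BASE_VERSION}.${REV}"   # e.g. 1.1.4.2342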
+1 on the Jira/Bamboo solution. The only additional information about the build I would include (for my purposes) is the Subversion Release, although the Tagging operation is 80% of what I want.
Manually maintaining the release/version information is a royal pain. Letting JIRA drive it is a great idea.
On the final question, about where bugs/defects get logged and releasing a version:
Defect/Issue is logged against the release where it appears. A defect in 1.0.0-rc1 gets logged against 1.0.0-rc1
JIRA has (or maybe we added) a 'Fix-For' field that would have the planned release, in this case 1.0.0
If the defect/issue is severe enough, it may be necessary to add another 'rc' release.
The release is made when there are no outstanding critical defects/issues and the customer (or management) agrees that any remaining issues can be deferred
The beauty of managing this through JIRA is that adding releases, generating change-logs, etc. is automated fairly well.
We also use <major>.<minor>.<buildnumber>, and we manage this with CruiseControl(.NET) on our build server. We use WiX and the CruiseControl config to manage the major and minor numbers - we still increment those by hand - but the build number happens automatically on the build server. You could set up a rule to increment the major/minor automatically too, I believe - we just like to do that manually so that it takes conscious thinking by a dev when it is time to name a particular release level.
Major.Minor.BuildDateNumber.DailyBuildNumber
Major and Minor are set by us, manually incrementing them as we see fit.
BuildDateNumber is the number of months since the project started (counting the starting month as month 1), multiplied by 100, plus the day of the current month.
DailyBuildNumber is incremented for every build after midnight each day, starting at zero.
E.g. 4th build of release 5.2 on 10 July, where the project started 1 Jan that year, would have version number
5.2.710.3
This is all calculated for us by the Version task in NAnt.
This keeps the version numbers unique and also allows us to quickly calculate when an installation was built.
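Outside of NAnt, the same arithmetic is easy to reproduce; a hedged shell sketch (the start date and major.minor values are made up, and the daily build counter is assumed to be supplied by the build server):
# Project start: 1 Jan, counted as month 1
START_YEAR=2008
START_MONTH=1
YEAR=$(date +%Y)
MONTH=$(date +%m | sed 's/^0//')   # strip leading zero
DAY=$(date +%d | sed 's/^0//')
MONTHS=$(( (YEAR - START_YEAR) * 12 + (MONTH - START_MONTH) + 1 ))
BUILD_DATE_NUMBER=$(( MONTHS * 100 + DAY ))
DAILY_BUILD_NUMBER=${DAILY_BUILD_NUMBER:-0}   # normally set by the build server
echo "5.2.${BUILD_DATE_NUMBER}.${DAILY_BUILD_NUMBER}"   # e.g. 5.2.710.3 for the 4th build on 10 July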