What's currently the best way to version multiple subprojects of one root project in Bazaar?
I'm familiar with bzr-externals and scmproj. I'm more drawn to bzr-externals, since it allows the use of built-in commands (I'm using Bazaar Explorer); however, it seems to lack some features I would like.
My project looks like this:
CommonLibrary
ProjectA (uses CommonLibrary)
ProjectB (uses CommonLibrary)
However, I would like to be able to store some project-level files (and have them versioned too), because I'm working in Visual Studio (which needs a solution file containing information for all of its projects).
In almost all cases, I would like both projects to share the same common library.
Also, I'm currently versioning them all under one branch; however, when I choose either of the above solutions, I will need to split them, and I don't know how (I would like to preserve all history).
What's the best way to accomplish this?
In order to split your branch, look at the bzr split command, which splits a subdirectory of a tree into a separate tree, preserving its history.
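For example, a minimal sketch of splitting the layout above into separate trees (note that bzr split requires a rich-root repository format; bzr upgrade can convert an older branch first):

    cd RootProject
    # Each subdirectory becomes an independent tree that keeps its history.
    bzr split CommonLibrary
    bzr split ProjectA
    bzr split ProjectB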
I also mainly use bzr-externals because it uses the builtin commands.
I have a behave project that has grown large, and I am trying to tidy it up and manage the file system.
I can group my feature files and "module" files (files that do something) in a directory structure, and my tests still run. However, if I try to group my step files together under different folders within the steps folder, I get a not-implemented-step error.
Ideally, I would like to be able to lay my project out as follows:
Features
    Component A
        feature file 1
        feature file 2
    Component B
        feature file 3
        feature file 4
Steps
    Common Given Steps
    Common When Steps
    Common Then Steps
    Component A
        Given Steps
        When Steps
        Then Steps
    Component B
        Given Steps
        When Steps
        Then Steps
Currently, however, if I lay the steps folder out like this, the Component A and Component B steps cannot be found.
Is it possible to do this in behave, or do I need to just leave my steps folder as it is and only tidy up the other directories?
By default, behave will ONLY look for step definitions in the root features/steps directory; if you put your files in sub-directories, behave will not recognize them.
See also: https://github.com/behave/behave/issues/169
In my opinion this is a needless limitation of the framework, but sadly, that is how it works and it is working as expected - even if, arguably, it would be better off working differently.
You can get around this issue by organizing your step definitions into sub-directories and importing those sub-directories from a module that lives directly underneath features/steps. See: https://github.com/behave/behave/blob/master/features/step.use_step_library.feature
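For instance, a minimal sketch of such an import module (the file and package names are illustrative, and each sub-directory needs an __init__.py so it is importable):

    # features/steps/import_steps.py
    # behave only loads modules that sit directly in features/steps/,
    # so this module pulls in the step definitions from the sub-packages.
    import component_a.given_steps  # features/steps/component_a/given_steps.py
    import component_a.when_steps
    import component_a.then_steps
    import component_b.given_steps  # features/steps/component_b/given_steps.py
    import component_b.when_steps
    import component_b.then_steps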
Not my favorite workaround, but it is a workaround.
Good Morning,
I am working on legacy code. This legacy code consists of multiple projects (written in C with NI LabWindows/CVI) and was never managed in a source control system, only in folders on disk. Over time it got a little messy: copies of the folder were created, and changes were made across all the folders depending on which project was being built.
The result is that there are 5 folders, each containing a different code base for what was once the same code. Many files were modified in all the folders because they are used in several projects, yet each project was only ever built from one of the 5 folders (so project A was only built in folder 1, project B in folder 4, etc.). It is not only raw code, but also user interface files.
I hope that was clear so far.
My task is to merge all the code into one code base (as it originally started), and I would like to get some suggestions.
Here is the plan:
1. Create a baseline version from the one folder that presumably has the most changes.
2. Create a Git repository to store the code and all changes.
3. Go through all the folders and merge the files into the baseline version using file-diff software (folder 1 is the baseline; merge folder 2 into the baseline, then folder 3, and so on), as sketched below.
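A rough sketch of how steps 1-3 might look on the command line (folder names are illustrative):

    # Steps 1 and 2: folder 1 becomes the baseline, tracked in Git.
    git init codebase
    cp -r folder1/* codebase/
    cd codebase
    git add -A && git commit -m "Baseline: folder 1"

    # Step 3: for each remaining folder, inspect what differs...
    git diff --no-index . ../folder2
    # ...fold in the changes you want (e.g. with a 3-way diff tool such
    # as KDiff3, Meld, or Beyond Compare), then record the result:
    git add -A && git commit -m "Merged changes from folder 2"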
Do you have any comments on this plan? What is good? Bad? Are there tools I can use?
This seems like as good a plan as any. You have a mess on your hands either way.
If there are many changes to the user interface panels, that could be a headache. The .UIR files are binaries, which will make Git merges and diffs useless.
Go into each project and, under Options->Preferences, select the box to save .UIR files as .TUI files, then save the project. This will give you a text file describing the user interface and allow you to use diff tools properly.
EDIT
When the User Interface is active, you can directly select Options->Save in Text Format to do this as a one-off.
Good Luck!
It might also be worthwhile to use the UI to Code converter under CVI's Tools menu to convert all your UIRs to code. This should make them more compatible with text-based source control (like the save in text format approach), but may also ease the process of merging UIs.
I have just migrated my code from Perforce to TFS. Everything looks good, but there is one issue: forward and reverse integration from one solution to another. This is a show-stopper for me.
There are two different solutions, and two projects are common to both, but each solution has its own copy of them. Forward integration would mean merging the whole application from solution 1 to solution 2, resolving the conflicts in the common projects. Once resolved, the code would be merged back to solution 1 (reverse integration). The point to understand is that only the common projects need to be merged; everything else can be excluded.
Can similar setup be done in TFS?
Yes, this scenario is possible in TFVC, but not very common. You have a few options:
Create a branch root at the solution level and merge the files from solution 1 to solution 2. As part of the merge operation, exclude the files you don't want to merge. Later you can merge backwards and forwards at the folder level (see the sketch after this list).
Create a folder relationship, but do not turn the folders into branch roots. This allows you to merge one folder with another folder at any time, but doesn't show these folders as branches per se.
Create a branch root at each project level and merge each project individually. This has a couple of drawbacks, as you can't branch the whole solution in this case (branch roots can't be nested).
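To illustrate the first option, a rough tf.exe sketch (server paths are illustrative; the same can be done from Source Control Explorer):

    tf merge $/TeamProject/Solution1 $/TeamProject/Solution2 /recursive
    REM Undo the pending merge on anything you don't want to take across:
    tf undo $/TeamProject/Solution2/NonCommonProject /recursive
    tf checkin /comment:"Forward integration: solution 1 -> solution 2"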
Or you can approach the problem differently:
Create a separate solution that contains the common code and use package management (NuGet package publishing) to share the dependencies between both solutions (essentially creating 3 solutions).
Use workspace mappings to keep the common code in a single location in version control and map the code to different locations on disk. You can use compiler directives or configuration or different abstractions (interfaces, abstract classes) in code to compile the sources into different versions.
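For the workspace-mapping approach, a minimal sketch (workspace names and local paths are illustrative):

    REM Map the same server folder into each solution's workspace.
    tf workfold /map $/TeamProject/Common C:\src\Solution1\Common /workspace:Solution1WS
    tf workfold /map $/TeamProject/Common C:\src\Solution2\Common /workspace:Solution2WS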
I found the exact solution, which is more or less the first option Jesse provided. Basically, we need to create the common projects for one of the solutions and then branch them into the other solution. At a later point we can merge them from solution 1 to 2 and check in the merged files in solution 2, then merge from solution 2 to 1 and check in the merged files in solution 1.
I would like to know your opinion about how you would organize the files/directories in a big web application using MVC (Backbone, for example).
I would do the following ( * ). Please tell me your opinion.
( * )
js
js/models/myModel.js
js/collections/myCollection.js
js/views/myView.js
spec/model/myModel.spec.js
spec/collections/myCollection.spec.js
spec/views/myView.spec.js
This is how I've traditionally organized my files. However, I've found that with larger applications it really becomes a pain to keep everything organized, named uniquely, etc. A 'new' way that I've been going about it is organizing my files by feature rather than type. So, for example:
js/feature1/someView.js
js/feature1/someController.js
js/feature1/someTemplate.html
js/feature1/someModel.js
But, oftentimes there are global "things" that you need, like the "user" or a collection of locations that the user has built. So:
js/application/model/user.js
js/application/collection/location.js
This pattern was suggested to me because you can then work on feature sets, and package and deploy them using RequireJS with relatively little effort. It also reduces the chance of dependencies forming between feature sets, so if you want to remove a feature or replace it with brand-new code, you can just swap out a folder of 'stuff' rather than hunting for every file. Also, in IDEs, it just makes the files you're working on easier to find.
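As a rough illustration, each feature folder can expose a single RequireJS entry point (the names below mirror the example paths above and are purely illustrative):

    // js/feature1/main.js
    define([
        './someModel',
        './someView'
    ], function (SomeModel, SomeView) {
        // One entry point per feature: the whole folder can be added,
        // replaced, or removed without hunting for individual files.
        return {
            start: function (options) {
                var model = new SomeModel(options);
                return new SomeView({ model: model });
            }
        };
    });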
My two cents.
Edit: What about the spec files?
A few thoughts - you'll just have to pick the one that seems most natural to you I think.
You could follow the same 'feature folder' pattern with the spec files. The upside is that all of the specs are in one place. The downside is that, much like what you're currently doing, you have two places for one feature's files.
You could put the specs in a 'spec' folder inside the feature folder. The upside is that you now have actual packages that can be wrapped up in a single zip file with no chance of clobbering other work. It's also easier to find directly related files when writing tests; they're all in the same parent folder. The downside is that your production code and test code are now in the same folder, (possibly) publishing the tests to the world. Granted, you'll probably end up compiling the production JavaScript down to one file at some point, so I'm not sure that's much of an issue.
My suggestion: if this is a large application and you figure you're going to have a few hands touching the files, leave something like a 'package.json/yml/xml' file in each feature folder. In there, list out the production, spec, and any data files you need for testing (you can most likely write a quick shell script to do this for you). Then write a quick script that looks through your source folder for 'package.whateverYouChose' files, gets the test files, and builds your unit-testing page from them. So, let's say you add another package: run 'updateSpecRunner' (or whatever you name the script), and it'll generate another SpecRunner.html file (or whatever you named the file you run the specs with). Then you can test it manually in a browser, or automate it using PhantomJS/Rhino.
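A minimal sketch of such a script in Node.js (the package-file name, its 'specs' field, and the SpecRunner template are all illustrative):

    // updateSpecRunner.js
    var fs = require('fs');
    var path = require('path');

    // Walk the source tree and collect spec paths from every package.json.
    function collectSpecs(dir, specs) {
        fs.readdirSync(dir).forEach(function (name) {
            var full = path.join(dir, name);
            if (fs.statSync(full).isDirectory()) {
                collectSpecs(full, specs);
            } else if (name === 'package.json') {
                var pkg = JSON.parse(fs.readFileSync(full, 'utf8'));
                (pkg.specs || []).forEach(function (spec) {
                    specs.push(path.join(dir, spec));
                });
            }
        });
        return specs;
    }

    var tags = collectSpecs('js', []).map(function (spec) {
        return '<script src="' + spec + '"></script>';
    }).join('\n');

    fs.writeFileSync('SpecRunner.html',
        '<html><head>\n' + tags + '\n</head><body></body></html>');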
Does that make sense?
You can find a good example of how to organize your application at this link:
Backbone Jasmine examples
It looks more or less like your implementation.
I am using Team Foundation Server (TFS) 2008, and am preparing to merge several TFS projects together, and I'd like to do this in the best way possible. All of these projects are within a single TFS instance.
I am merging the projects because the product line contained in all of them is small and is worked on by a single, small team, so the separate projects are simply unnecessary. Thus, I am trying to simplify our structure by merging the projects together. What this amounts to, I think, is a need to move the files from all projects into just one of them. But I want to do this without affecting file history, etc.
I have tried researching this, and have found the following resources:
Moving files from one Team Foundation Project to Another
Moving files between projects in Solution Explorer removes source control history, breaking merge capabilities
The second resource, a Microsoft knowledge base article, actually looks pretty useful. But, before I dive into this, I am just wondering what advice and/or warnings the SO community might offer? I am just hoping to go into this with my "eyes wide open."
A move of a solution item within Solution Explorer (and Solution Navigator from the Productivity Power Tools) will lead to a delete and an add in version control.
But you can also move items either within Source Control Explorer or using tf.exe's move command from the command line. The latter can, of course, be automated in a script if there are many items to move.
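A minimal sketch of the tf.exe route (server paths are illustrative):

    tf move $/OldTeamProject/ProjectA $/TargetTeamProject/ProjectA
    tf checkin /comment:"Moved ProjectA into TargetTeamProject"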