I'm thinking of "mask" as in a circuit mask (I think). Let me explain with a handy chart:
The common source would be physically in c:\source
Instance A would be physically in c:\instanceA but initially have nothing but symlinks to everything in c:\source
Instance B would be physically in c:\instanceB but initially have nothing but symlinks to everything in c:\source
As you made changes to Instance A and Instance B, you would create a mask that hides files from Common Source if they were deleted from the instance folders, and creates a new physical file in the instance directory if an existing Common Source file was modified.
New files would live in the instance folders but never make it back to the Common Source.
This type of setup would be very useful for a project where I want to make many different kinds of small tweaks across multiple instances, with distinct threads working on distinct instances.
I know about symbolic links but they fall short in the case of modifying a file.
Is there anything that can accomplish this? If not, should I try to make this and patent it? Seems like a good idea to me.
I would be on Windows Server 2008 or later.
At the risk of stating the obvious, git is one tool that can be used to achieve this behavior:
Make your "Common Source" a git repository
Clone the repository twice to "InstanceA" and "InstanceB"
In each instance, check out a new, unique branch
As changes are made in "Common Source" you can merge those changes into "InstanceA" and "InstanceB" while maintaining the "MASK" (changes to the branch) you've created for each.
This has the added benefit of allowing changes from "Common Source" to be pulled as you wish instead of having changes to "Common Source" pushed out to each instance (something I imagine would be less desirable and more prone to error).
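To make that concrete, here is a minimal sketch of the setup, assuming git is installed and on the PATH and reusing the c:\source, c:\instanceA and c:\instanceB paths from the question; everything else is illustrative.

import subprocess

# Paths taken from the question; purely illustrative.
COMMON_SOURCE = r"c:\source"
INSTANCES = {
    "instanceA": r"c:\instanceA",
    "instanceB": r"c:\instanceB",
}

def run(*args, cwd=None):
    """Run a command and fail loudly if it errors."""
    subprocess.run(args, cwd=cwd, check=True)

# One-time setup: turn Common Source into a repository (skip if it already is one).
run("git", "init", COMMON_SOURCE)
run("git", "add", "-A", cwd=COMMON_SOURCE)
run("git", "commit", "-m", "Initial import of Common Source", cwd=COMMON_SOURCE)

# Clone once per instance and give each one its own branch -- that branch is the "mask".
for branch, path in INSTANCES.items():
    run("git", "clone", COMMON_SOURCE, path)
    run("git", "checkout", "-b", branch, cwd=path)

# Later, to fold new Common Source changes into an instance while keeping its mask:
#   git fetch origin && git merge origin/master
# (run inside the instance folder; the default branch may be "main" on newer git)

Each instance then pulls from Common Source only when it wants to, which is exactly the "pulled as you wish" behaviour described above.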
You're looking for a union mount. Unfortunately, I'm not aware of any implementations for Windows, but there are several available for Linux, notably UnionFS.
In general they are used to make a read-only filesystem look like it's read-write, typically on live CDs.
Since Windows 7 you can use Libraries, which allow you to include files from more than one physical location.
Windows 7 also includes the VirtualStore type of folder (for example, when you create or modify a file in the Program Files folder, it is actually created in a user-specific folder: C:\Users\user\AppData\Local\VirtualStore). However, I don't know how you can create this type of folder yourself, and also, as far as I know, you can add and modify files but not delete files that way.
You'll want a version control system that supports per-file checkout and permissions. Then you just need to set up a simple API converter that takes file-system commands and converts them to version control commands.
Delete -> disable permission to access file.
Directory commands should look for local copies and things you have permission to access.
Open -> grab local copy, on fail check-out file from repository.
Save -> disable permission, save local copy. //Avoid duplicates being seen.
Close without saving -> if permission to access from repository, delete local copy.
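A purely illustrative sketch of that mapping is below; Repository is a hypothetical interface standing in for whichever version control back end you pick, not a real library.

import os

class Repository:
    """Hypothetical stand-in for the version control back end."""
    def checkout(self, name, dest): ...
    def has_permission(self, name): ...
    def set_permission(self, name, allowed): ...
    def list(self): ...

class MaskingLayer:
    """Translate file-system style commands into version control commands."""

    def __init__(self, repo, local_dir):
        self.repo = repo
        self.local_dir = local_dir

    def _local(self, name):
        return os.path.join(self.local_dir, name)

    def delete(self, name):
        # Delete -> disable permission to access the repository copy.
        self.repo.set_permission(name, allowed=False)

    def listdir(self):
        # Directory commands -> local copies plus repository files you may access.
        local = set(os.listdir(self.local_dir))
        visible = {n for n in self.repo.list() if self.repo.has_permission(n)}
        return sorted(local | visible)

    def open(self, name):
        # Open -> grab the local copy; on failure, check the file out of the repository.
        path = self._local(name)
        if not os.path.exists(path):
            self.repo.checkout(name, path)
        return open(path, "rb")

    def save(self, name, data):
        # Save -> disable permission, save a local copy (avoids duplicates being seen).
        self.repo.set_permission(name, allowed=False)
        with open(self._local(name), "wb") as f:
            f.write(data)

    def close_without_saving(self, name):
        # Close without saving -> if the repository copy is accessible, drop the local copy.
        if self.repo.has_permission(name) and os.path.exists(self._local(name)):
            os.remove(self._local(name))

The permission flag is what plays the role of the mask: a file whose repository permission is off is hidden behind the instance's local copy, or behind nothing at all in the delete case.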
((By the way, this storage optimization seems somewhat spurious for versioning. Disk space is relatively cheap.
If your interest isn't in versioning, I'd suggest looking into separating out the information you would potentially want as volatile and creating configuration files for each branch. This, of course, requires a predictable pattern to the changes.))
IBM Rational ClearCase is a version control system with file-mask-like behaviour. Its filesystem is known as MVFS (MultiVersion File System) and can be mounted on a workstation like an ordinary network drive.
On the ClearCase server (aka the VOB) you can store several versions of the same file, each on a different code branch. The sets of files visible to a user are called views. Each view has a configuration (aka a configuration specification), which defines which files and versions are visible to the current user. A typical one looks like this:
# From wikipedia: http://en.wikipedia.org/wiki/IBM_Rational_ClearCase#Configuration_specifications
# Show all elements that are checked out to this view, regardless of any other rules.
element * CHECKEDOUT
# For all files named 'somefile', regardless of location, always show the latest version
# on the main branch.
element .../somefile /main/LATEST
# Use a specific version of a specific file. Note: This rule must appear before
# the next rule to have any effect!
element /vobs/project1/module1/a_header.h /main/proj_dev_branch/my_dev_branch1/14
# For other files in the 'project1/module1' directory, show versions
# labeled 'PROJ1_MOD2_LABEL_1'. Furthermore, don't allow any checkouts in this path.
element /vobs/project1/module1/... PROJ1_MOD2_LABEL_1 -nocheckout
# Show the 'ANOTHER_LABEL' version of all elements under the 'project1/module2' path.
# If an element is checked out, then branch that element from the currently
# visible version, and add it to the 'module2_dev_branch' branch.
element /vobs/project1/module2/... ANOTHER_LABEL -mkbranch module2_dev_branch
Related
I need to modify a Simulink project stored under ClearCase. From this project I must generate the C code, but that is not the problem. The problem is that all generated files (*.c and others) are saved into ClearCase, and the code generation deletes some files without overwriting the old version with the new one. Fortunately this seems to affect only files other than *.c, but in any case, under ClearCase (I use a Windows client) I found, in place of each deleted file:
the file name
three colored question marks
I think that ClearCase still has the information about the file stored, but is not able to locate it.
Now I need a command/script for ClearCase that will help me find ALL removed files in the view, because the project structure is very complex and a manual search is hard.
Thanks for any suggestions.
"three colored question" marks means "checked out but removed", as in this example (you can recover from it by reloading the snapshot view)
If an automatic process is generating or deleting files in a snapshot view (it wouldn't be able to do the same in a dynamic view), then you should end up with a bunch of hijacked files (as identified in a snapshot view).
You could check them out and check them in.
For the files that need to be deleted, you can follow "What's the “proper” way to delete files from a ClearCase snapshot?".
But both processes are manual and don't scale well.
There are two viable options:
1/ Don't version what is generated (you can re-generate it at any time)
2/ If you must version what is generated, then:
generate it outside of the snapshot view
use clearfsimport to import the result of that generation into the snapshot view: that will checkout the right files and will delete the files that are no longer generated.
That would be the right solution for "I need a command/script for ClearCase that will help me find ALL removed files".
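For the "find ALL removed files" part, a small script along these lines can scan a snapshot view. This is only a sketch: it assumes cleartool is on the PATH, the view path is hypothetical, and the exact annotation strings printed by cleartool ls can differ between ClearCase versions.

import subprocess

VIEW_ROOT = r"C:\path\to\snapshot_view"   # hypothetical; point it at your view

# Annotations cleartool ls typically adds in a snapshot view for problem files.
MARKERS = ("[loaded but missing]", "[checkedout but removed]", "[hijacked]")

out = subprocess.run(
    ["cleartool", "ls", "-recurse"],
    cwd=VIEW_ROOT, capture_output=True, text=True, check=True,
).stdout

for line in out.splitlines():
    if any(marker in line for marker in MARKERS):
        print(line)

From that list you can decide which files to recover, check in, or clean up via the clearfsimport route described above.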
I need some help on how to collect all the information from ClearCase, tar or zip it, and store it in a provided space. We have migrated the major baselines from ClearCase to a different SCM tool, but we still have ClearCase. We want to capture every version, change, baseline, etc. (basically capture everything, but not the SCM tool itself) and zip it or put it in a flat file or so. This is just for historical purposes, so that tomorrow, if someone wants to know what was in ClearCase, they can see it. So, any ideas?
The reason this doesn't exist (as far as I know) is in the nature of ClearCase (compared to a revision-based VCS tools).
It is a file-based VCS:
You create a new version for each file you change (instead of a unique repository-wide revision)
You create a label on each file you want to label (instead of a tag referring to a revision or a commit)
You create a branch for each file modified in that branch (instead of a single directory for SVN, or branch for other VCS)
...
That means you wouldn't simply export revisions/labels/branches with ClearCase. You would export them for each file: it doesn't scale well and would take too much time and space.
Migrating the major baselines is a sensible course of action that I have recommended before.
But for the rest, I always keep a ClearCase instance around as a way to explore the full history/events in case it is needed, while the recent history is managed in the new VCS tool.
Storing that as a flat file you could read without ClearCase isn't, again as far as I know, available.
Hence my previous "vobstore-reformatvob" proposition.
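If a raw, human-readable dump is enough for the archive, something along these lines is about as far as you can go without a real export tool. This is a sketch only: it assumes cleartool is on the PATH, it must be run from a view onto the VOB (the path below is hypothetical), and it uses the usual fmt_ccase placeholders (%n, %d, %u, %o, %e, %c), which you should double-check against your release.

import subprocess
import zipfile

VOB_PATH = r"M:\my_view\my_vob"          # hypothetical view path into the VOB
DUMP_FILE = "clearcase_history.txt"

# One line per event for every element: name, date, user, operation, event, comment.
history = subprocess.run(
    ["cleartool", "lshistory", "-recurse", "-fmt", "%n|%d|%u|%o|%e|%c\\n", "."],
    cwd=VOB_PATH, capture_output=True, text=True, check=True,
).stdout

with open(DUMP_FILE, "w", encoding="utf-8") as f:
    f.write(history)

# Zip the flat file for long-term storage.
with zipfile.ZipFile(DUMP_FILE + ".zip", "w", zipfile.ZIP_DEFLATED) as z:
    z.write(DUMP_FILE)

Because the history is recorded per element, expect this to be slow on a large VOB; that is the same scaling caveat described above.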
I'm wondering if it is possible to create project-specific files in ClearCase. What I want to do is create files in one project and use ClearCase to source-control them, but I don't want those files to leave that project because they have no applicability in any other project.
For example:
I want to manage database changes in ClearCase. I plan on having 3 folders in each project (projects are created for each release of the software). The folders are "install", "update", and "backout". The install folder contains the scripts needed to build a database from scratch for the stream I'm working in; let's say the stream is in project "13.03". The "update" and "backout" folders contain scripts needed to update and back out the changes to bring the database from 13.02 to 13.03, and vice versa.
In the 13.04 project, I'll have the same folder structure, but I don't want the contents of the "update" and "backout" folders in my 13.04 because I'll have other files that will bring the database from 13.03 to 13.04.
So what I'm looking to do is essentially create "project-specific" files/folders in ClearCase.
I'd gladly take any other recommendation for managing database changes in ClearCase. Keep in mind that the 13.03 and 13.04 (for example) baselines could be in development at the same time.
It seems you are referring to the same project, with different versions (13.02, 13.03, ...).
If that is the case:
simply update your 3 folders according to the current version
put a baseline (if we are talking about ClearCase UCM) on the component representing your project
if evolutions need to be made to any file of a specific version, make a child Stream called, for instance, "13.03", and update your "13.03" folders there. They will evolve in complete isolation in their own dedicated "13.03" branch.
If you have to create a new directory for each project version (which means you don't need a source control system at all, just a simple backup system), then you have no choice but to recreate each of those folders with their appropriate files in them, doing a new "add to source control" each time.
While at home for personal projects i use Mercurial, at work we're using ClearCase.
I am attempting to run a few horizontal refactorings (touching lots of source files) in Visual Studio on the code base; however, since each file is locked by ClearCase, each one has to be unlocked, and I am prompted for the actual activity that the checkout is for.
In Mercurial, there's no such concept as far as I'm aware: files are never locked at any point in time!
Is there a way of doing such a refactoring, or any other operation that acts on multiple files, without having to check out each and every one manually?
In a DVCS (distributed VCS like Git or Mercurial), you simply cannot "lock" a file, since all the other repos wouldn't be aware of such a "status".
But with ClearCase and its locking mechanism (optimistic with "unreserved checkout" or pessimistic with "reserved checkout"), you need to make a checkout to tell ClearCase you will modify some files.
However, you could also, for large refactoring:
make and update a snapshot view
set all the files as writable (through an OS-based command, not through ClearCase "checkout")
perform your changes
finally, search for all hijacked files and check out/check in those files
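The second and fourth bullets can be scripted roughly like this. A sketch only: the view path is hypothetical, cleartool is assumed to be on the PATH, and the "[hijacked]" annotation and the output parsing may need adjusting for your ClearCase version.

import os
import stat
import subprocess

VIEW_ROOT = r"C:\views\my_snapshot_view"   # hypothetical snapshot view path

def make_all_writable(root):
    # Step 2: clear the read-only bit through the OS, not through ClearCase.
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            os.chmod(path, os.stat(path).st_mode | stat.S_IWRITE)

def hijacked_files(root):
    # Step 4a: list the files cleartool reports as hijacked in the snapshot view.
    out = subprocess.run(
        ["cleartool", "ls", "-recurse"],
        cwd=root, capture_output=True, text=True, check=True,
    ).stdout
    return [line.split("@@")[0].strip() for line in out.splitlines() if "[hijacked]" in line]

def checkout_checkin(root, files, comment="mass refactoring"):
    # Step 4b: turn each hijacked file into a real checkout, then check it in.
    for f in files:
        subprocess.run(["cleartool", "co", "-nc", "-usehijack", f], cwd=root, check=True)
        subprocess.run(["cleartool", "ci", "-c", comment, f], cwd=root, check=True)

# make_all_writable(VIEW_ROOT)                              # before the refactoring
# checkout_checkin(VIEW_ROOT, hijacked_files(VIEW_ROOT))    # after it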
It seems like a good and clean thing to ensure that your deployed files appear on the target system with a consistent time/date. Many applications seem to do this, but other than taking care not to overwrite users' existing data, I guess it has no real significance. I'm having a purge of my installer packaging and I'd like to know if there are any good reasons for specific date/time handling.
Windows Installer uses the timestamp information on files (in some situations) in addition to the REINSTALLMODE property to determine which files should be updated (e.g. "o" means only replace a file if the existing copy is older).
I believe the default behavior is to leave the files timestamped for their original creation (not their installation onto the user's machine). Unless you have a specific reason to do something different, I would follow the default behavior.