I want to ask what exactly a derived object is in ClearCase and how it works.
Additionally, I want to ask if there is another program with the same function, because in Git, MKS, or IBM® Rational Team Concert™ I cannot find anything similar. Is it obsolete?
This is quite linked to dynamic views, which are very specific to ClearCase and not found in other, more recent VCSs.
See "ClearCase Build Concepts"
Developers perform builds, along with all other work related to ClearCase, in views. Typically, developers work in separate, private views. Sometimes, a team shares a single view (for example, during a software integration period).
As described in Developing Software, each view provides a complete environment for building software that includes a particular configuration of source versions and a private work area in which you can modify source files, and use build tools to create object modules, executables, and so on.
As a build environment, each view is partially isolated from other views. Building software in one view never disturbs the work in another view, even another build of the same program at the same time. However, when working in a dynamic view, you can examine and benefit from work done previously in another dynamic view. A new build shares files created by previous builds, when appropriate. This sharing saves the time and disk space involved in building new objects that duplicate existing ones.
You can (but need not) determine what other builds have taken place in a directory, across all dynamic views. ClearCase includes tools for listing and comparing past builds.
The key to this scheme is that the project team's VOBs constitute a globally accessible repository for files created by builds, in the same way that they provide a repository for the source files that go into builds.
A file produced by a software build is a derived object (DO). Associated with each derived object is a configuration record (CR), which clearmake or omake uses during subsequent builds to determine whether the DO can be reused or shared.
A derived object (DO) is a file created in a VOB during a build or build audit with clearmake or omake.
Each DO has an associated configuration record (CR), which is the bill of materials for the DO. The CR documents aspects of the build environment, the assembly procedure for a DO, and all the files involved in the creation of the DO.
The build tool attempts to avoid rebuilding derived objects.
If an appropriate derived object exists in the view, clearmake or omake reuses that DO.
If there is no appropriate DO in the view, clearmake or omake looks for an existing DO built in another view that can be winked in to the current view.
The search process is called shopping.
This is relevant for very large C or C++ makefile-based projects.
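For illustration, here is a minimal sketch of the commands involved (target and object names are placeholders):

# Build with clearmake; matching DOs built in other dynamic views are winked in
clearmake -C gnu all
# List the derived objects built for a file, across all views
cleartool lsdo hello.o
# Show a DO's configuration record (its bill of materials)
cleartool catcr hello.o
# Compare the configuration records of two builds (DO selectors come from lsdo output)
cleartool diffcr do-selector-1 do-selector-2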
I think the TL;DR version of this is:
Derived objects contain information that describes
What was accessed to build the object, including dependencies that may not be in your build files.
Other files created during the build process ("Sibling Derived Objects")
The commands used to build the object (the "build script"), assuming that clearmake, omake, or the ANT listener was used to run the build.
In the case of clearmake and omake, this information is used to avoid rebuilds, potentially speeding builds. The lookup is referred to as DO "shopping," and the build avoidance is "winkin."
If you have regulatory or security compliance needs where this level of auditing is critical, there really isn't anything else that does this.
I work on LabWare LIMS, which has both configuration and customization via its own programming language and internal code editor, and stores this customization code in database records. (Note: not the source code of the actual application itself, just the customization code, a.k.a. LIMS Basic.) Almost everything in LIMS is stored in the database.
We want to investigate the possibility of using source control to protect this code but we don't know much more than the theory of using something like Git. (I have worked as a junior QA and used git but not as a dev and my knowledge is limited!)
Of particular use would be the merging tools, as currently we have to manually merge code in a text editor, if we even notice there is a conflict. (Checking content between dev and live is time-consuming and involves multiple tools, some of which are 3rd-party tools we have developed ourselves, which are hit and miss. I personally find it easiest to cut and paste into a text file and then use Beyond Compare.)
There is no notification that the code is different when moving it from dev to live (no deployment as such; you just import an XML file), so we often have things going live that someone was still working on, unbeknownst to each other. I.e. dev 1 is working on the code in object 1; dev 2 gets a ticket to make a change to object 1, does so, and puts their change live; whatever dev 1 was doing is now also live in whatever state it was in. (Because we don't always have time to thoroughly check what state each object is in across up to 3 different databases.)
Is it possible to use source control just on the code within the database, but not necessarily the database itself? (We have backups and such for that, but it's easy for some aspects of the system to get overwritten by multiple devs working on overlapping areas at the same time.)
If anyone reading this has any specific knowledge of LW LIMS, we are referring mostly to the Subroutines. We have versioned Analyses, which stands in for source control for the moment and is somewhat effective, but there is no way to control who is doing what on the subroutines other than a comment log at the top. I have tried to find any information on how other teams source control their code in LIMS, but to no avail.
The structure of one of these tables can range from simple, with the code existing in one field as a straight text dump plus a few other fields such as changed_on, changed_by, and name (Subroutines), to more complex, with code relating to one record sprinkled around multiple rows of another table entirely (Analyses). But even if it could just deal with the simple scenario to start with, that would be great!
TL;DR: Could the contents of the Code field in a database record be treated like a regular code object in other dev environments somehow and source controlled using Git? (And is anyone willing to explain it simply for me to follow?)
You need to version control table fields of subroutines, but LW LIMS's IDE has no integration with version control tools (such as Git, SVN, etc.), so the direct answer is no.
If you really want to version control the code in the database, you can create a Git repository and put only the code in it. When a file is updated, you can commit and push the changes, and it's easy to compare the differences between versions.
For more detail about Git, you can refer to the Git book.
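As a minimal sketch of that idea (the export step is hypothetical; you would need your own script that dumps each subroutine's code field from the database to a text file):

# One-time setup: a repository that holds only the exported code
git init lims-code
cd lims-code
# Hypothetical step: dump each SUBROUTINE code field to its own file,
# e.g. subroutines/CalcDilution.txt, using your own export script
git add subroutines/
git commit -m "Snapshot of subroutine code from dev"
# After re-exporting later, see exactly what changed
git diff
git log -p subroutines/CalcDilution.txt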
LabWare LIMS has a number of options for version control. You COULD version the Subroutine table by adding a SUBROUTINE.VERSION field to the table; this works the same way as other versioned tables in LabWare, where it asks you whether you would like to create a new version of the object before saving. There are a few customers I work with that have done this.
Alternatively (and possibly our more recommended method prior to LEM), there is the Snapshot capability, where the system automatically takes a "snapshot" of objects as they are saved. When viewing these you have the ability to view them side by side in a comparison dialogue; it will show < or > for lines which differ.
Another approach: if you have auditing turned on, you are able to view the audit history for changes to specific objects, including subroutines.
One other approach is to use configuration packages, which have the ability to record version AND build numbers, though individual subroutines are probably a bit too granular for their intended design.
Lastly, since this question was originally posted we have developed a product called LabWare Environment Manager (LEM) which has some good change control functionality built-in.
For more information on the suggestions above, please have a look at the LabWare Technical manual for the version you are on. We also have a mailing list for questions like this to be posted. You might find an answer there. If you have access to our Support webpage you're able to search previous questions that have been asked. I'd also suggest that you get in touch with your Account Manager at LabWare who can help you answer some of your questions.
HTH
I am working with ClearCase UCM. We have a project that has been created as a 'single stream' Project Type.
We now wish this to have multiple child streams.
Is there a way to change this after creation? If so, how? Or does this need to be recreated?
I have looked into commands that change the project, but other than policies I can't see anything that may be related, and I can't see any related policy names. This is fairly new to me; I just happen to have the most experience in my area.
The simple/multi-stream nature of a UCM project is determined at its creation with cleartool mkproj -model.
The "model" (simple or default) is not something you can change by policy or with cleartool chproj.
That is why the IBM help page on "Single-stream projects" says:
You may want to use a single-stream project during the initial stage of development when several developers want to share code quickly.
When the development effort expands and you need a parallel development environment, you can create a multiple-stream project based on the final baselines in the single-stream project.
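A minimal sketch of that migration (project, stream, baseline, and PVOB names are placeholders):

# Create a new multiple-stream project (the default model) in the project VOB
cleartool mkproject -in MyFolder@/vobs/mypvob MyProject_V2
# Seed its integration stream from the final baseline of the single-stream project
cleartool mkstream -integration -in MyProject_V2@/vobs/mypvob -baseline FINAL_BL@/vobs/mypvob MyProject_V2_Int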
We have two databases, A and B. A contains tables that should also be deployed to B; however, A should always be considered as the master of those tables. We don't want to duplicate the schema object scripts. We do not want to simply reference A's table from B - they need to be separate, duplicated tables.
As far as I can see, there are two ways to achieve this:
Partial projects: export the shared schema objects to a partial project (.files) file, and import it into B's database project
Adding shared schema object files to B's database project as links.
These both have the disadvantage that you need to explicitly specify files; you cannot specify a folder. This means that any time a schema object that needs sharing is added to A's database project, either the partial project export needs to be run again or the new file needs to be added as a link to B's project.
What are the advantages and disadvantages of these techniques? Are there any better ways of achieving this that I may have missed? Thanks.
Partial Projects are not supported in the VS2012 RC release, which makes me think that I shouldn't use them. Furthermore, it appears that Composite Projects within SQL Server Data Tools (SSDT) may be the long-term solution.
I've discovered that it is possible to link to entire folders by editing the dbproj file manually. For example:
<ItemGroup>
  <!-- Link every .sql file from the source database's Tables folder -->
  <Build Include="..\SourceDatabase\Schema Objects\Tables\*.sql">
    <SubType>Code</SubType>
    <Link>Schema Objects\Tables\%(FileName).sql</Link>
  </Build>
</ItemGroup>
This works quite nicely, so it will be our preferred solution until we evaluate SSDT. The main drawback that I've found with it so far is that it will include all files in the source database's folders, including those that are not included in the source database project.
I'm thinking of "mask" as in a circuit mask (I think). Let me explain with a handy chart:
The common source would be physically in c:\source
Instance A would be physically in c:\instanceA but initially have nothing but symlinks to everything in c:\source
Instance B would be physically in c:\instanceB but initially have nothing but symlinks to everything in c:\source
As you made changes to Instance A and Instance B, you would create a mask that would hide files from the Common Source if they were deleted from the instance folders, and create a new physical file in the instance directory if an existing Common Source file was modified.
New files would live in the instance folders but never make it back to the Common Source.
This type of setup would be very useful for a project where I want to do many different types of small tweaks to multiple instances where distinct threads would work on distinct instances.
I know about symbolic links but they fall short in the case of modifying a file.
Is there anything that can accomplish this? If not, should I try to make this and patent it? Seems like a good idea to me.
I would be on Windows Server 2008 or later.
Fearing I'm stating the obvious, but git is one tool that can be used to achieve this behavior.
Make your "Common Source" a git repository
Clone the repository twice to "InstanceA" and "InstanceB"
In each instance, check out a new, unique branch
As changes are made in "Common Source" you can merge those changes into "InstanceA" and "InstanceB" while maintaining the "MASK" (changes to the branch) you've created for each.
This has the added benefit of allowing changes from "Common Source" to be pulled as you wish instead of having changes to "Common Source" pushed out to each instance (something I imagine would be less desirable and more prone to error).
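A minimal sketch of that setup (paths and branch names are placeholders):

# Make the common source a repository
cd c:\source
git init
git add .
git commit -m "initial common source"
# Create each instance as a clone, each on its own branch
git clone c:\source c:\instanceA
cd c:\instanceA
git checkout -b instanceA-mask
# Later, merge common-source changes into the instance while keeping its mask
git pull origin master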
You're looking for a union mount. Unfortunately, I'm not aware of any implementations for Windows, but there are several available for Linux, notably UnionFS.
In general they are used for making a read-only filesystem look like it's read-write: typically on live-CDs.
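For illustration, a minimal sketch using unionfs-fuse on Linux (paths are placeholders):

# Overlay a writable instance directory on the read-only common source;
# modified files are copied up into /instanceA, and deletions are masked by whiteouts
unionfs-fuse -o cow /instanceA=RW:/source=RO /mnt/instanceA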
Since Windows 7 you can use libraries, which will allow you to include files from more than one physical location.
Windows 7 also includes the VirtualStore type of folder: for example, when creating or modifying a file in the Program Files folder, it will actually be created in a user-specific folder, C:\Users\user\AppData\Local\VirtualStore. However, I don't know how you can create this type of folder yourself, and as far as I know, you can add and modify files that way, but not delete them.
You'll want a version control system that supports per-file checkout and permissions. Then you just need to set up a simple API converter that takes file-system commands and converts them to version control commands.
Delete -> disable permission to access file.
Directory commands should look for local copies and things you have permission to access.
Open -> grab local copy, on fail check-out file from repository.
Save -> disable permission, save local copy. //Avoid duplicates being seen.
Close without saving -> if permission to access from repository, delete local copy.
((By the way, this storage optimization seems somewhat spurious for versioning. Disk space is relatively cheap.
If your interest isn't in versioning, I'd suggest looking into separating out the information you would potentially want as volatile and creating configuration files for each branch. This, of course, requires a predictable pattern to the changes.))
IBM Rational ClearCase is a version control system which provides file-mask-like behaviour through MVFS (the MultiVersion File System), which can be mounted on a workstation like an ordinary network drive.
On a ClearCase server you can store several versions of the same file in a VOB, each on a different code branch. The sets of files visible to a user are called views. Each view has a configuration (a configuration specification, or config spec), which defines which files and versions are visible to the current user. A typical config spec looks like this:
# From wikipedia: http://en.wikipedia.org/wiki/IBM_Rational_ClearCase#Configuration_specifications
# Show all elements that are checked out to this view, regardless any other rules.
element * CHECKEDOUT
# For all files named 'somefile', regardless of location, always show the latest version
# on the main branch.
element .../somefile /main/LATEST
# Use a specific version of a specific file. Note: This rule must appear before
# the next rule to have any effect!
element /vobs/project1/module1/a_header.h /main/proj_dev_branch/my_dev_branch1/14
# For other files in the 'project1/module1' directory, show versions
# labeled 'PROJ1_MOD2_LABEL_1'. Furthermore, don't allow any checkouts in this path.
element /vobs/project1/module1/... PROJ1_MOD2_LABEL_1 -nocheckout
# Show the 'ANOTHER_LABEL' version of all elements under the 'project1/module2' path.
# If an element is checked out, then branch that element from the currently
# visible version, and add it to the 'module2_dev_branch' branch.
element /vobs/project1/module2/... ANOTHER_LABEL -mkbranch module2_dev_branch
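To inspect or apply a config spec you use cleartool; for example (the file name is a placeholder):

cleartool catcs                 # show the current view's config spec
cleartool edcs                  # edit it in place
cleartool setcs my_config_spec  # replace it with the rules from a file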
While at home for personal projects I use Mercurial, at work we're using ClearCase.
I am attempting to run a few horizontal refactorings (touching lots of source files) in Visual Studio on the code base; however, since each file is locked by ClearCase, each one has to be unlocked, and ClearCase prompts for the actual activity that the checkout is for.
In Mercurial, there's no such concept as far as I'm aware: files are never locked at any point in time!
Is there a way of doing such a refactoring, or any other operation that acts on multiple files, without having to check out each and every one manually?
In a DVCS (distributed VCS like Git or Mercurial), you simply cannot "lock" a file, since all the other repos wouldn't be aware of such a "status".
But with ClearCase and its locking mechanism (optimistic with "unreserved checkouts" or pessimistic with "reserved checkouts"), you need to make a checkout to tell ClearCase you will modify some files.
However, you could also, for large refactoring:
make and update a snapshot view
set all the files as writable (through an OS-based command, not through ClearCase "checkout")
perform your changes
search for all hijacked files, then check out and check in those files
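A minimal sketch of that sequence on Windows, run from the snapshot view root (file names are placeholders):

cleartool update .                          # refresh the snapshot view
attrib -r *.* /s                            # make all files writable at the OS level, not a ClearCase checkout
# ...run the refactoring in Visual Studio; edited files become "hijacked"...
cleartool ls -recurse | findstr hijacked    # find all hijacked files
cleartool checkout -nc -usehijack file1.cs  # check out, keeping the hijacked content
cleartool checkin -nc file1.cs              # check in the change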