Accessing environment variables in a ClearCase configuration specification

Is it possible to access environment variables in the config spec in ClearCase?
I have this code:
element /folder/... /main/current_branch/LATEST
I wish to set up my development so that I can update the branch simply by setting some environment variables. I would like something like this to work; is it possible?
element /folder/... /main/$current_branch/LATEST
where $current_branch should return the current branch set in that environment variable.

AFAIK, that is not possible.
The way I handle that is by having templates that I fill in (automatically). But I also use separate views; views are disposable and I rebuild my views routinely (every week, every couple of weeks, sometimes a few times in a day if I need to be sure of the cleanliness of the builds).
I'd show you my scripts but there are a large number of them, and they're fairly intricately intertwined with each other and with the working environment we have (multiple but overlapping VOBs for each of a number of major versions of a number of products, with some parts of the config spec provided by CM and custom preambles to identify what I'm working on). We've been using ClearCase for about 18 years now.
The net result is a config spec for a bug fix branch that looks like:
# @(#)$Id: 243260.jleffler.toru.cs,v 1.1 2011/08/30 15:23:02 jleffler Exp $
#
# Config Spec for Bug 243260 - Blah, blah, blah, blah
element * CHECKEDOUT
element * .../TEMP.243260.jleffler/LATEST
mkbranch -override TEMP.243260.jleffler
#time 26-Jul-2009.00:00:00UTC-08:00
element /vobs/main_vob/... /main/LATEST
element /vobs/other_vob/... dist.1.00 -nocheckout
include /atria/cspecs/product/1.23/product-1.23.4
#include /atria/cspecs/product/1.16/product-1.16.8
element * /main/LATEST
The bit between the commented out time stamp and the catch-all rule is provided by CM. The bit above the time stamp is custom to the branch (TEMP.243260.jleffler — which identifies it as a temporary branch, the bug fix which it is for, and who is doing the work). The template actually lists about 10 different config specs from CM, and I just delete the ones that aren't relevant. The view name is based on the bug number, my login, and the machine where it's created (toru). I've disguised most of the rest, but it is based on a bug cspec that I created earlier today. My bug.view script took the bug number, a description, the path for the view working storage, and the VOBs where I needed the branch created and went off and set everything up automatically. (And I'm still archaic enough to use RCS to keep my cspecs under control.)
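For illustration only, here is a minimal sketch (Python, with made-up file and placeholder names; it is not the actual scripts described above) of the general template-fill idea:
# Hypothetical sketch of filling a bug-branch config spec template.
# 'bug.cspec.template' and the @BRANCH@/@BUG@ placeholders are invented for illustration.
import getpass

def make_bug_cspec(template_path, bug_number, out_path):
    branch = 'TEMP.%s.%s' % (bug_number, getpass.getuser())
    with open(template_path) as src:
        text = src.read()
    text = text.replace('@BRANCH@', branch).replace('@BUG@', bug_number)
    with open(out_path, 'w') as dst:
        dst.write(text)
    return branch

# e.g. make_bug_cspec('bug.cspec.template', '243260', '243260.cspec')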
Some of my views last a long time (by name). For example, the current release reference view will survive for the 5+ years that the release will be supported. It'll be rebuilt hundreds of times over that period, but the name remains the same: prod-1.23-ref.jleffler.toru. So the cspec for that will change over time, as different work is needed, but the basic cspec is three lines — CHECKEDOUT, include standard CM provided configuration file, and LATEST.

No, I have never seen a config spec based on an environment variable.
I looked at the config_spec man page, "Writing config specs" and "How config specs work": none of them mentions that possibility.
For dynamic views, I have seen scripts modify the config spec dynamically, based on an environment variable, using cleartool setcs (since the refresh is near instantaneous with a dynamic view).
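As a minimal sketch of what such a wrapper could look like (this is not a ClearCase feature, just an illustrative script with placeholder rules; it assumes a dynamic view and must be run from inside the view):
# Hypothetical sketch: build a config spec from an environment variable and
# apply it to the current (dynamic) view with "cleartool setcs".
import os
import subprocess
import tempfile

branch = os.environ['current_branch']        # e.g. "my_branch"
cspec = (
    'element * CHECKEDOUT\n'
    'element /folder/... .../%s/LATEST\n'
    'element * /main/LATEST\n'
) % branch

with tempfile.NamedTemporaryFile('w', suffix='.cspec', delete=False) as tmp:
    tmp.write(cspec)
    cspec_path = tmp.name

subprocess.check_call(['cleartool', 'setcs', cspec_path])   # run from inside the view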
Note: don't forget that your current_branch might not always derive directly from /main. I prefer using the syntax:
element /folder/... .../my_branch/LATEST
in order to select my_branch, without depending on its direct "parent" branch (even though, in base ClearCase, there is no real "parent" branch).

Related

Polarion document baseline with links in description field

I have a generic LiveDoc in Polarion which contains a series of referenced requirements. Recently I started to insert links into the description of some of the requirements to make it easier to navigate from one requirement to another. However, I've discovered that when I baseline the document the links in the description don't get updated to point to the baselined version of the requirement, but the links (to the same requirement) in the Linked Work Items section are updated to include the baseline revision.
Is there a way to get the links in the description to point to the baselined revision like the ones in the Linked Work Items section?
I'm using Polarion 21 R1 if that matters.
Thanks in advance for your help.
Interesting approach, but I doubt you will get this working 100%. HTML is notoriously hard to parse (completely and correctly), so you should avoid this workflow.
Use Linked Work Items instead, and use the new Collections feature, which most probably does what you need.
Also, while it is possible to link to older / specific revisions (of artefacts) in Polarion, I have never found a scenario that was maintainable and useful at the same time.
Note that revision numbers get big very fast (5-7 digits). Comparing or updating these links is very error-prone and demanding work, full of devastating pitfalls.
We follow the approach of keeping items unchanged after release and creating new items instead of changing existing ones. We then have more work items, but Polarion's UI (and most people's heads) can deal with a large number of work items better than with versioned links.

ClearCase config spec for nested branching

We have a VOB where code development is mostly done in the main branch. At a certain point in time, it's time to work on some new features that are closely related to each other. For this, we created a new branch, some_feature_set. Multiple developers work on this feature set. Each developer works in an own branch and once a certain sub-feature is deemed finished, it gets merged back into some_feature_set. Once the feature set is fully implemented, the plan is to merge it into main.
To achieve this, we use config specs like this one:
element * CHECKEDOUT
element * /main/some_feature_set/some_sub_feature/LATEST
element * /main/some_feature_set/LATEST -mkbranch some_sub_feature
element * /main/LATEST -mkbranch some_feature_set
Since work for some_sub_feature is intended to be merged into some_feature_set, our idea was to already branch from some_feature_set before creating the task branch.
Our organization uses dynamic views (and we can't change this). In order to protect ourselves from changes that other developers make to the main and some_feature_set branches which might break ongoing work in the sub-feature branch, we use timestamps. A config spec would therefore look like this:
element * CHECKEDOUT
element * /main/some_feature_set/some_sub_feature/LATEST
mkbranch some_sub_feature
element * /main/some_feature_set/LATEST -time <some_time>
mkbranch some_feature_set
element * /main/LATEST -time <some_time>
end mkbranch
end mkbranch
This causes issues when checking out a file from main. ClearCase will branch it to some_feature_set, but since there is no rule to select the newly created version, it will try to branch again and issue an error that the branch exists. This we can fix by adding more rules to the config spec:
element * CHECKEDOUT
element * /main/some_feature_set/some_sub_feature/LATEST
mkbranch some_sub_feature
element * /main/some_feature_set/LATEST -time <some_time>
element * /main/some_feature_set/0
mkbranch some_feature_set
element * /main/LATEST -time <some_time>
element * /main/0
end mkbranch
end mkbranch
This way we don't get any issues when checking out files or adding new files to ClearCase. The issue we do get, though, is that when another developer wants to do some work on the some_feature_set branch for a file that so far only has versions on the main branch, and checks this file out, the version selected by the view will change.
Let's say, for example, that with the config spec listed above, version /main/4 gets selected for some_file in my view. Work continues in parallel and version /main/5 is created by a different developer. The time rule in the config spec will still select version /main/4. At some later point in time, yet another developer has to do some work for some_feature_set and sets up their own view with a similar config spec but with a newer timestamp, such that version /main/5 of some_file gets selected. This developer has to make some changes to some_file and checks it out. This immediately creates versions /main/some_feature_set/0 and /main/some_feature_set/some_other_sub_feature/0. Because /main/some_feature_set/0 now exists, my view selects it. Its contents are the same as /main/5, not /main/4 as was the case before the other developer checked out the file.
Is there anything that can be done to prevent the issue described above from happening?
First, one branch per developer for developing the same feature is not the best practice. I have long advocated against that (since 2009).
But if you must, and want sub-branches, it is far more effective to create them from label, instead of relying on time.
And it is best not to force the branch path (it becomes too finicky, as your question illustrates).
I have used time-based selection rules in "ClearCase : Loading Older Version of a specific Directory?".
But you will see that the rule for new elements is both simpler and appears only once:
element * /main/0 -mkbranch myBranch
You need to specify, for a new element, that you want it created directly in the right branch.
Which is why branch-based selection rules generally use the ellipsis notation ..., as in .../myBranch. See "Details of config spec in base ClearCase".
The general idea is: you should not care from which branch a new branch is created as long as its starting version is the right one (ie, the one with the right immutable label).
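As a sketch only (the label name is invented; it assumes an immutable label FEATURE_SET_START was applied to the versions the feature set should start from), the resulting config spec could look like:
element * CHECKEDOUT
element * .../some_sub_feature/LATEST
element * FEATURE_SET_START -mkbranch some_sub_feature
element * /main/0 -mkbranch some_sub_feature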

mkelem on branch only

I am using ClearCase 7.1.2 and am working on a project. At some point long before I came along there was a branch (let's call it 'pilot') which eventually became the production code while the main tree was left behind. At this point I need to make a branch off of pilot to implement a new feature but am running into a problem.
I have made my branch and all seems to go well when committing changes to existing files but when I do a mkelem the new file ends up on main. I want it on pilot. What would the config spec look like for this or what combination of commands can I piece together to make this a reality?
Right now my config spec looks something like this (going from memory):
elements * CHECKEDOUT
elements * main/0 -mkbranch pilot
elements * main/pilot
elements * main/LATEST
I'll update once I can see what I have going on there.
You will find examples on the config_spec man page, as well as Config spec rules for elements in subbranches.
What you need to do is first put a label on the pilot branch (i.e., on all the elements present in your view on the pilot branch), in order to make new versions from a fixed point in time.
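For example (a minimal sketch; the label name matches the config spec below, and /vobs/your_vob is a placeholder), running in a view that selects .../pilot/LATEST:
cleartool mklbtype -c "starting point for my_feature_branch" LABEL_ON_PILOT
cleartool mklabel -recurse LABEL_ON_PILOT /vobs/your_vob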
Then:
element * CHECKEDOUT
element * .../my_feature_branch/LATEST
element * LABEL_ON_PILOT -mkbranch my_feature_branch
element * /main/LATEST -mkbranch my_feature_branch
Trying to make a branch from the "LATEST" versions of another branch is really not recommended: you simply don't know what you are working from on your new branch.
I think I'd be expecting to see a cspec like this:
element * CHECKEDOUT
element * .../my_feature_branch/LATEST
element * .../pilot/LATEST -mkbranch my_feature_branch
element * /main/LATEST -mkbranch my_feature_branch
The -mkbranch on the last line answers your question. Line 2 ensures you use your feature branch when it exists. The change on line 3 should work better than your line 2 (unless CC 7.1.2 has some new abbreviations which allow your old version to work; I seem to be using 7.0.x).
Treat this answer of mine with some caution - see the answer by VonC for an alternative way of doing this. Clearly, there is some issue which VonC sees with this approach. However, the team I work on has been doing precisely this for many years (since about 1994) without running into the issues which have VonC so up in arms. In addition, it takes about 12 hours to apply a full label to the set of VOBs which make up our product set (somewhere around a dozen large multi-site VOBs, at a guess). I checked the time with our CC guru, and he commented that we won't be migrating to UCM any time soon, in part because of this labelling issue.
So, best modern practice with small VOBs may use a label created specifically as a starting point for a feature branch, but not all systems use them. Empirically, the label is not necessary. I'm not sure what other 'best practices' (or 'worst practices that compensate for the lack of best practices') we have that prevent us running into problems.

App Engine Memcache Key Prefix Across Versions

Greetings!
I've got a Google App Engine Setup where memcached keys are prefixed with os.environ['CURRENT_VERSION_ID'] in order to produce a new cache on deploy, without having to flush the cache manually.
This was working just fine until it became necessary for development to run two versions of the application at the same time. This, of course, is yielding inconsistencies in caching.
I'm looking for suggestions as to how to prefix the keys now. Essentially, there needs to be a variable that changes across versions when any version is deployed. (Well, this isn't quite ideal, as the cache gets totally blown out.)
I was thinking of the following possibilities:
Make a RuntimeEnvironment entity that stores the latest cache prefix. Drawbacks: even if cached, it slows down every request, and it cannot be cached in memory, only in memcached, as deployment of another version may change it.
Use a per-entity version number. This yields very nice granularity, in that the cache can stay warm for non-modified entities. The downside is that we'd need to push to all versions when models are changed, which I want to avoid in order to test model changes before deploying to production.
Forget the key prefix and use a global namespace for keys, with a script to flush the cache on every deploy. This actually seems just as good as, if not better than, the first idea: the cache is totally blown in both scenarios, and this one avoids the overhead of the runtime entity.
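For what it's worth, a minimal sketch of that third option (a hypothetical admin-only handler using the old webapp framework; the URL is invented and should be restricted to admins in app.yaml):
# Hypothetical flush handler, hit once after each deploy.
from google.appengine.api import memcache
from google.appengine.ext import webapp
from google.appengine.ext.webapp.util import run_wsgi_app

class FlushCache(webapp.RequestHandler):
    def get(self):
        memcache.flush_all()
        self.response.out.write('cache flushed')

application = webapp.WSGIApplication([('/admin/flush_cache', FlushCache)])

def main():
    run_wsgi_app(application)

if __name__ == '__main__':
    main()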
Any thoughts, different ideas greatly appreciated!
The os.environ['CURRENT_VERSION_ID'] value will be different for your two versions, so you will have separate caches for each one (the live one, and the dev/testing one).
So I assume your problem is that when you "deploy" a version, you do not want the cache from development/testing to be used? (Otherwise, like Nick and systempuntoout, I'm confused.)
One way of achieving this would be to use the domain/host name in the cache key, since this is different for your dev/live versions. You can extract the host by doing something like this:
import urlparse

scheme, netloc, path, query, fragment = urlparse.urlsplit(self.request.url)
# Discard any port number from the hostname
domain = netloc.split(':', 1)[0]
This won't give particularly nice keys, but it'll probably do what you want (assuming I understood correctly).
There was a bit of confusion with how I worded the question.
I ended up going for a per-class hash of attributes. Take this class for example:
import hashlib

from google.appengine.ext import db

class CachedModel(db.Model):
    @classmethod
    def cacheVersion(cls):
        # Compute (and cache on this concrete class) a short hash of the
        # datastore properties, so the prefix changes only when the model does.
        # (Single leading underscore: a double-underscore name would be mangled
        # and defeat this check.)
        if '_cacheVersion' not in cls.__dict__:
            props = cls.properties()
            prop_keys = sorted(props.keys())
            fn = lambda p: '%s:%s' % (p, str(props[p].model_class))
            string = ','.join(map(fn, prop_keys))
            cls._cacheVersion = hashlib.md5(string).hexdigest()[0:10]
        return cls._cacheVersion

    @classmethod
    def cacheKey(cls, key):
        return '%s-%s' % (cls.cacheVersion(), str(key))
That way, when entities are saved to memcached using their cacheKey(...), they will share the cache only if the actual class is the same.
This also has the added benefit that pushing an update that does not modify a model, leaves all cache entries for that model intact. In other words, pushing an update no longer acts as flushing the cache.
This has the disadvantage of hashing the class once per instance of the webapp.
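For context, a minimal sketch of how such a key might be used with memcache (the helper below is hypothetical and not part of the class above):
# Hypothetical read-through helper built on CachedModel.cacheKey().
from google.appengine.api import memcache

def cached_get(key):
    entity = memcache.get(CachedModel.cacheKey(key))
    if entity is None:
        entity = CachedModel.get(key)                 # fall back to the datastore
        if entity is not None:
            memcache.set(CachedModel.cacheKey(key), entity)
    return entity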
UPDATE on 2011-3-9: I changed to a more involved but more accurate way of getting the version. Turns out using __dict__ yielded incorrect results as its str representation includes pointer addresses. This new approach just considers the datastore properties.
UPDATE on 2011-3-14: So Python's hash(...) is apparently not guaranteed to be equal between runs of the interpreter. I was getting weird cases where a different App Engine instance was seeing different hashes. I'm using md5 (which is faster than sha1, which is faster than sha256) for now; there is no real need for it to be crypto-secure, just an OK hash function. I will probably switch to something faster, but for now I'd rather be bug-free. I also ensured the keys were getting sorted, not the property objects.

How best to branch in Clearcase?

I've previously documented my opinions on Clearcase as a source control system, but unfortunately I am still using it. So I turn to you guys to help me alleviate one of my frustrations.
We have just moved from a one-branch-per-developer system to a one-branch-per-task system, in an attempt to improve some of the issues that we've been having with determining why certain files were changed. Generally I am happy with the solution, but there is one major issue. We are using simple scripts to start and end tasks, which create a new branch named with the username and task number and then update the local snapshot view to have a config spec similar to the following:
element * CHECKEDOUT
element * .../martin_2322/LATEST
element * /main/LATEST -mkbranch martin_2322
load /Project/Application
Let's say that my project has two coupled files A.cs and B.cs. For my first task I make changes to A on the branch. Then I need to stop working on task 2322 for whatever reason and start work on task 2345 (task 2322 is not finished, so I don't merge it back into main).
I create a new task branch 2345, edit both A.cs and B.cs and merge the results back into main. Now I go back to work on 2322, so I change my config spec back to one defined above. At this point I see the A.cs file from the task branch (as I edited it earlier, so I get the version local to that branch) and the latest version of B.cs from main. Since I don't have the changes made to A.cs on the 2345 branch the build breaks. What I need instead is to be able to pick up task 2322 from where I left off and see it with the old version of A.cs - the one that was latest in main when the branch was created.
The way I see it I have a few options to fix this:
Change the config spec so that it gets files from main at the right date. This is easy enough to do if I know the date and don't mind setting it by hand, but I can't figure out how to automate this into our task switching scripts. Is there any way to get the creation date of a branch?
Create a label for each branch on main. Theoretically simple to do, but the labelling system in our install of CC is already collapsing under the weight of a few hundred labels, so I don't know if it will cope with one per developer per branch (notice that the task in my example is 2322 and we're only about a quarter of the way through the project).
Merge out from main into the task branch. Once again should work, but then long running branches won't just contain the files changed for that task, but all files that needed to be merged across to get unrelated things working. This makes them as complicated as the branch-per-developer approach. I want to see which files were changed to complete a specific task.
I hope I'm just missing something here and there is a way of setting my config spec so that it retrieves the expected files from main without clunky workarounds. So, how are you guys branching in Clearcase?
A few comments:
a branch per task is the right granularity for modifying a set of files within a "unit of work", provided the "task" is not too narrow; otherwise you end up with a gazillion of branches (and their associated merges)
when you create a config spec for a branch, you apparently forgot the line for new elements (the ones you "add to source control")
Plus, you may consider branching from a fixed starting point, which would solve the "old version of A.cs - the one that was latest in main when the branch was created" bit.
I know you have too many labels already, but you could have a script to "close" a task which would (amongst other things) delete that starting label, avoiding label clutter.
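For instance, such a "close task" script could end with something like this (a sketch only, assuming the label type is no longer needed anywhere; the VOB tag is a placeholder):
cleartool rmtype -rmall lbtype:STARTING_LABEL_2322@/vobs/your_vob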
Here the config spec I would use:
element * CHECKEDOUT
element * .../martin_2322/LATEST
element * STARTING_LABEL_2322 -mkbranch martin_2322
# selection rule for new "added to source control" file
element * /main/0 -mkbranch martin_2322
load /Project/Application
I would find this much easier than computing the date of a branch.
Do not forget that you can merge your task back to main, and also merge some of your files from your finished task branch to your new current task branch, if you need to retrofit some of your fixes to that current task.
You can get the creation date of a branch type by using the describe command in cleartool:
cleartool describe -fmt "%d" brtype:martin_2322
This will print out the date and time that the branch type was created. You can use this to implement your first option. For more information, you could read the following cleartool man pages, but hopefully the above command is all you need.
cleartool man describe
cleartool man fmt_ccase
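To tie this into the task-switching scripts mentioned in the question, a rough sketch (Python, with a hypothetical VOB tag, minimal error handling, and no guarantee the %d date format matches what a -time rule accepts without reformatting):
# Hypothetical sketch: build a task config spec whose -time rule uses the
# creation date of the task branch type.
import subprocess

def branch_creation_date(branch, vob):
    # e.g. branch_creation_date('martin_2322', '/vobs/Project')
    return subprocess.check_output(
        ['cleartool', 'describe', '-fmt', '%d', 'brtype:%s@%s' % (branch, vob)],
        universal_newlines=True
    ).strip()

def task_cspec(branch, vob):
    created = branch_creation_date(branch, vob)  # may need reformatting for -time
    return (
        'element * CHECKEDOUT\n'
        'element * .../%s/LATEST\n'
        'element * /main/LATEST -time %s -mkbranch %s\n'
        'element * /main/0 -mkbranch %s\n'
        'load /Project/Application\n'
    ) % (branch, created, branch, branch)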
We use ClearCase, and we find that creating a branch for a release is often much easier than doing it by task. If you do create one per task, then I'd have a 'main branch' for that release, branch the tasks off that branch, merge them back in when finished, and then merge the release branch back to the trunk.
