Centralized Config Spec for ClearCase Snapshot Views

We have a config spec that we use for our builds that we encourage all developers in our organization to use so that they can run any task in our build without fear of failure. Every now and again we need to update that config spec to include new elements or exclude old elements.
When we do this, the process is to send a quick mail to all of our developers telling them to manually update, with the current config spec, any views they use to build our system.
This is annoying and error-prone, so many developers just ignore those mails - and then we get called because the build is broken.
I'm very interested in defining the config spec centrally somehow, so that all views can use it and we can update it out from under people. This may seem draconian, but when you have hundreds of developers who are all supposed to be running the same builds, it seems to make sense.
I've already investigated the idea of using a share to store the config spec and then pulling it into the developers' views with an include line, but as the documentation states: "Include files are re-read on each execution of setcs and edcs." In testing, this means exactly what it seems to mean: the rules are only re-evaluated when the config spec is set or edited in some way.
The solution I'm looking for would re-evaluate the config spec every time you interact with ClearCase, or at the very least on every update. That way, I could manage the config spec for everyone.
Thoughts?

It can work, especially if your included config spec doesn't change too often.
Each time it changes, your users will have to run
cleartool setcs -current
(as explained in example #2 of this technote).
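For illustration, each developer's config spec would then contain little more than an include line pointing at the centrally managed file (the share path below is made up):

    # developer view config spec: all common build rules live in the shared file
    include \\buildserver\configspecs\central.cspec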
You will then need to decide where to store that common config spec:
- on a shared drive, or
- in a ClearCase view, in order to benefit from version history for that common config spec's content.
You can see a full debate in this thread:
However, I have encountered situations where a version-controlled include file was necessary because it referred to plenty of elements from legacy code which users had to use to continue their work on some of the new code. It was a pain and we had to live with it.
Just like with any other 'process', this too needs some 'education' of the users.

Flush Agent doesn't clear proxy clientlib paths

I am using AEM 6.3 with allowProxy for clientlibs. As expected, the dispatcher caches the clientlibs under the path /cache/etc.clientlibs/myapp/clientlibs/clientlib.css, but the corresponding JCR path is /apps/myapp/clientlibs/clientlib/mystyle.css.
So when clientlibs are modified during a deployment and published, they won't clear the corresponding Apache cache entries automatically. Today we do this manually.
We also use the automated cache buster VersionedClientlibs, so we never end up loading an obsolete clientlib. But the Apache cache piles up with thousands of obsolete clientlib files if it isn't cleared manually.
What is the recommended approach to clearing obsolete clientlibs at Apache when they are versioned and proxy-allowed?
This is a known limitation, and we've also been flushing the whole /etc.clientlibs path after each deployment; we do this via the ACS dispatcher-flush-ui.
Typically, when deploying to production, you'd flush all or part of the dispatcher cache anyway to make sure component changes are reflected, so adding this task to that process is easy.
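For reference, a manual flush of that path boils down to a standard dispatcher invalidation request; a sketch with a placeholder host name:

    curl -X POST http://dispatcher-host/dispatcher/invalidate.cache \
         -H "CQ-Action: Delete" \
         -H "CQ-Handle: /etc.clientlibs" \
         -H "Content-Length: 0"

(CQ-Action: Delete removes the cached files; CQ-Action: Activate would merely invalidate them.)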
If you really want this to become an automatic process, you can:
- Write a ResourceChangeListener (example here) or a JCR EventListener (example here), listen for changes under the clientlib path, and replicate the corresponding /etc.clientlibs/ path - a minimal sketch follows this list.
- Write a ReplicationPathTransformer so that when your clientlib path is replicated, you can transform it into the corresponding /etc.clientlibs/ path to be flushed on the dispatcher.
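To make the first option concrete, here is a hedged sketch of such a listener - not production code: the clientlib root, the 'clientlib-flush' service user, and the error handling are all assumptions:

    package com.example.aem;

    import com.day.cq.replication.ReplicationActionType;
    import com.day.cq.replication.ReplicationOptions;
    import com.day.cq.replication.Replicator;
    import org.apache.sling.api.resource.observation.ResourceChange;
    import org.apache.sling.api.resource.observation.ResourceChangeListener;
    import org.apache.sling.jcr.api.SlingRepository;
    import org.osgi.service.component.annotations.Component;
    import org.osgi.service.component.annotations.Reference;

    import javax.jcr.Session;
    import java.util.List;

    @Component(
        service = ResourceChangeListener.class,
        property = {
            ResourceChangeListener.PATHS + "=/apps/myapp/clientlibs", // assumed clientlib root
            ResourceChangeListener.CHANGES + "=ADDED",
            ResourceChangeListener.CHANGES + "=CHANGED",
            ResourceChangeListener.CHANGES + "=REMOVED"
        })
    public class ClientlibFlushListener implements ResourceChangeListener {

        @Reference
        private Replicator replicator;

        @Reference
        private SlingRepository repository;

        @Override
        public void onChange(List<ResourceChange> changes) {
            Session session = null;
            try {
                // 'clientlib-flush' is a hypothetical service user with replication rights
                session = repository.loginService("clientlib-flush", null);
                for (ResourceChange change : changes) {
                    // map /apps/myapp/clientlibs/x to /etc.clientlibs/myapp/clientlibs/x
                    String proxyPath = change.getPath().replaceFirst("^/apps/", "/etc.clientlibs/");
                    ReplicationOptions opts = new ReplicationOptions();
                    opts.setSuppressVersions(true);
                    // activating the proxy path makes the flush agent invalidate it in the dispatcher cache
                    replicator.replicate(session, ReplicationActionType.ACTIVATE, proxyPath, opts);
                }
            } catch (Exception e) {
                // a real implementation would log and possibly retry here
            } finally {
                if (session != null) {
                    session.logout();
                }
            }
        }
    }

In practice you would likely also restrict this replication to the dispatcher flush agents (for example with an AgentFilter on the ReplicationOptions, which is roughly what ACS AEM Commons' DispatcherFlusher wraps), so regular publish agents don't try to activate a path that doesn't exist in the JCR.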
Hope this helps.

ClearCase UCM: Change Project Type?

I am working with ClearCase UCM. We have a project that has been created as a 'single stream' Project Type.
We now wish this to have multiple child streams.
Is there a way to change this after creation? If so, how? Or does this need to be recreated?
I have looked into commands that change the project but, policies aside, I can't see anything that might be related - nor any relevant policy names. This is fairly new to me; I just happen to have the most experience in my area.
The single-stream/multi-stream nature of a UCM project is determined at its creation with cleartool mkproj -model.
The "model" (simple or default) is not something you can change afterwards by policy or with cleartool chproj.
That is why the IBM help page on "Single-stream projects" says:
You may want to use a single-stream project during the initial stage of development when several developers want to share code quickly.
When the development effort expands and you need a parallel development environment, you can create a multiple-stream project based on the final baselines in the single-stream project.
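As a hedged illustration of that migration (all the project, folder, baseline, and PVOB names below are made up):

    cleartool mkproject -in myFolder@/mypvob -model default MyProjectV2@/mypvob
    cleartool mkstream -integration -in MyProjectV2@/mypvob \
              -baseline myFinalBaseline@/mypvob MyProjectV2_Int@/mypvob

Developers would then create their own child streams off the new integration stream.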

Is there an automated way to document Nancy services?

Is there any way to auto-generate Swagger documentation (or similar) for a Nancy service?
I found Nancy.Swagger, but there's no information on how to use it and the demo application doesn't seem to demonstrate generating documentation (if it does, it's not obvious).
Any help would be appreciated. Thanks!
In my current project I have looked into this problem a lot, using both Nancy.Swagger and Nancy.Swagger.Annotations.
I quickly discarded Nancy.Swagger, because to me personally it doesn't feel right that you have to create a pure documentation class for each Nancy module. The attributes solution was a bit "cleaner" - at least code base and documentation were in one place - but it very quickly became unmaintainable. Module code becomes unreadable because of the many attributes, and nothing is generated automatically: you have to put the path, all parameters, even the HTTP method into attributes. That is huge effort duplication, and problems appeared very fast. A few examples:
- I changed POST to PUT in Nancy and forgot to update the [Method] attribute.
- I added a parameter but not the attribute for it.
- I changed a parameter from path to query and didn't update the attribute.
It's too easy to forget to update the attributes (let alone the documentation-module solution), which leads to discrepancies between your documentation and your actual code base. Our UI team is in another country, and they had some trouble using the APIs because the documentation just wasn't up to date.
My solution? Don't mix code and documentation. Generating documentation from code (like Swashbuckle does) IS OK, but actually writing documentation in code and trying to duplicate the code in the documentation is NOT. It's no better than writing it in a Word document for your clients.
If you want Swagger docu, just do it the Swagger way.
- Spend some time with Swagger.Editor and really author your API in YAML. It looks all-text and hard, but once you get used to it, it's not.
- Spend some time with Swagger.Codegen and adapt it (it already does a fair job of generating Nancy server code, and with a few adjustments to the moustache templates it was just what I needed).
- Automate your process: write a couple of batch scripts to generate your modules and models from the YAML and copy them to your repository (see the sketch after this list).
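For illustration, one such batch step might be a single Swagger Codegen call (the jar, file, and folder names are placeholders; "nancyfx" is the Codegen id of its Nancy server generator):

    java -jar swagger-codegen-cli.jar generate -i api.yaml -l nancyfx -o generated-server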
Benefits? Quite a few:
- Your YAML definition is now the single source of truth for your REST contract. If something differs somewhere, it's wrong.
- The Nancy server code is auto-generated.
- The client code bases are auto-generated (in our case Android, iOS, and Angular).
So whenever I change something in the REST contract, all code bases are regenerated and added to their projects in one batch. I just have to tell the teams something was updated; they don't have to look through documents searching for it. Their code is simply regenerated, and they may see some compile errors in case of breaking changes.
Do I still use Nancy.Swagger(.Annotations)?
Yes, I do use it in another project, which has just one endpoint with a couple of methods that don't change often. There it's not worth the effort to set everything up, and I get my Swagger docs up and running fast. But if your project is big, the API is changing, and you have multiple code bases depending on your API, my advice is to invest some time in a real Swagger setup.
I am quoting the author's answer here, from https://github.com/khellang/Nancy.Swagger/issues/59:
The installation should be really simple, just pull down the NuGet package, add metadata modules to describe your routes, and hit /api-docs. That should get you the JSON. If you want to add swagger-ui as well, you have to add that manually right now.
No, not in an automated way. https://github.com/yahehe/Nancy.Swagger needs lots of manually created metadata.
There is a nice article here: http://www.c-sharpcorner.com/article/generating-api-document-in-nancy-using-swagger/
It looks like you still have to add swagger-ui separately.

Is "re-cycling" a ClearCase dynamic view without side-effects, and if not, how to rename view?

One of the shops I'm working at relies on dynamic views in ClearCase. The established norm has been to create a new view for each project effort. Over time I've found that I only need one or two views concurrently active, so I've taken to "reusing" a view by changing its config spec (after check-in, label, release, etc.). So far it has worked out. Is there any long-term problem with doing that? If not, is there any way I can rename the view (change the view tag) to better reflect its current purpose?
For base ClearCase dynamic views, the only side-effect you can run into when recycling a config spec involves view-private files:
Those are stored within the dynamic view's storage, and they are not always removed when the config spec is reset.
You also need to make sure no files were left checked out: those too are stored in the view storage, and once the config spec has changed they may no longer be visible or reachable (though you should still be able to unco them through the 'find co' GUI).
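As a hedged sketch (the view tag and file path are placeholders), you could check for both kinds of leftovers from the command line before recycling the view:

    cleartool lsprivate -tag my_dyn_view    # list the view-private files kept in the view storage
    cleartool lscheckout -cview -avobs      # run from within the view: list its checkouts across VOBs
    cleartool unco -rm src/foo.c            # undo a leftover checkout, discarding the private copy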
You cannot rename (change the tag of) a view, whether dynamic or snapshot.
And, just to be complete, you cannot recycle the config spec of a UCM dynamic view (which references a stream).
You can try to change the foundation baselines of said stream, but again, that is not always possible.
I vote for scrapping old views and creating fresh ones. Besides all the great input from VonC: from a disk-space point of view, old views tend to get bulky over time, and you soon won't be a favorite with your sysadmins :-)
In my experience there is no long-term effect from using only two dynamic views instead of one per "project". If you don't need the views active concurrently, it's a good method; that's the beauty of dynamic views - they can be updated very quickly and very frequently.
As for the renaming part: why rename? Just make a similar new dynamic view (or two) and give it a new name (view tag).

How to merge Drupal database changes

We currently use an SVN repository to keep everyone's local environments up to date. However, Drupal website development is somewhat trickier in that any custom code you write (for instance, PHP code written for a node body) is stored in the DB, so those changes aren't recognized by the SVN working copy.
A couple of developers are presently working on the same area of a Drupal site, but we're uncertain how best to merge our local Drupal database changes together. Committing patches of database dumps seems clumsy at best, and is most likely inefficient and error-prone for this purpose.
Any suggestions about how to approach this issue are appreciated!
Unfortunately, database deployment/updating is one of Drupal's weak spots. See this question & its answers, as well as this one, for some suggestions on how to deal with it.
As for CCK, you could find some hints here.
As for PHP code in content, I agree with googletorp that you should avoid it. However, if for some reason you absolutely have to do it, you could try to reduce the inline code to a simple function call, with the function itself living in a module (which would be tracked via SVN). But then you are only a small step away from removing the need for the inline code anyway ...
If you are putting PHP code into your database, you are doing it wrong. Some things do live in the database, like views and CCK fields plus some settings. But if you put PHP code inside a node body, you are creating a big code-maintenance problem. You should really use the API and hooks instead; create modules instead of ugly hacks with eval() etc.
All that has been said above is true and good advice. To answer your practical question: there are a number of recent modules you could use to transport the changes made by the various developers.
The "Features" module is a cure for the described issue - Drupal often provides nice features, yet stores lots of the configuration and structure in the DB. This module lets you capture a feature and output it as a pseudo-module (it qualifies as a module, with .info and code files and all). Here is how it works:
1. Select the functionality/feature to export.
2. The module analyses the modules, files, and DB content required to rebuild that feature elsewhere.
3. The module creates a pseudo-module that contains the instructions from #2 and outputs everything (even SQL to rebuild the stuff in the DB) into a module package (it also sets dependencies on the other required modules).
4. Install the pseudo-module on your new site and enable it.
5. The pseudo-module replicates the feature you exported, rebuilding DB data and all.
And you can tell your boss you did it all manually with razor focus to avoid even 1 error ;)
I hope this helps - http://drupal.org/project/features
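The Features module also comes with Drush integration, so the export/rebuild cycle can be scripted; a rough sketch, assuming Drush is installed (the feature name is made up):

    drush features-export my_feature    # capture the selected config into a pseudo-module
    drush features-revert my_feature    # rebuild the DB config from that module on the target site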
By committing patches of database dumps, do you mean taking an entire extract of the DB and committing it after each change?
How about a master copy of the database? Extract all tables, views, stored procedures, etc. into individual files, put them into SVN, and do your merge edits on the individual objects.
