Simple deployment from ClearCase on Unix

I have recently joined a team that has several applications that perform workload automation. They use ClearCase for version control, but the development and test environments are not checked out/deployed from ClearCase; instead (I surmise due to a lack of ClearCase expertise within the team) the code is simply FTP-ed to the respective Unix servers from Windows. I say "simple deployment" because all the code is interpreted (Perl and shell), so there is no need to compile. Needless to say, many things are wrong with this approach, most specifically the lack of version management in these environments from the point of deployment onward.
So I would like to bind our deployments to the repository and start controlling changes, but I am only a ClearCase novice. My specific question is: what is it that I deploy, a VIEW or a STREAM? I would say the latter, because views are user-specific, while (according to my understanding) a stream is like a project trunk, each view off of it is like a branch, and views are integrated back into their stream.
If anyone has any pointers to some useful yet succinct and lightweight ClearCase tutorials for an "accidental" CM liaison, please share.
Alternatively, if you think this task is suitable for Jenkins, despite being relatively simple (no build/compile involved), please chime in.
Thanks in advance.

You would need to use the Jenkins ClearCase UCM plugin (in combination with the Jenkins ClearCase plugin) in order to start jobs based on a ClearCase stream.
Jenkins will create a UCM snapshot view based on the stream you specify in it.
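Outside of the plugin, you can reproduce what it does by hand to deploy from a stream. A minimal sketch, where the view tag, stream selector, and paths are hypothetical:

cleartool mkview -snapshot -tag deploy_view -stream myproject_int@/vobs/mypvob /deploy/myproject
cd /deploy/myproject
cleartool update

You would then copy the loaded files to the target Unix server (rsync or scp instead of FTP from Windows), so every deployment stays traceable to a baseline on the stream.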
See also, for more on the stream:
"Difference between branches and streams in ClearCase?"
"Integration stream vs integration view in ClearCase"

Related

How to Add Missing Foundation Baselines in ClearCase

Someone from my team has deleted some foundation baselines from an integration stream, and now, when people rebase their development streams, many of the folders and files inside them are missing.
We can still see those missing foundation baselines in the development streams of that integration stream.
So I wanted to know how I can add those missing foundation baselines from those dev streams back to the int stream; we only have access to IBM ClearTeam Explorer to do it.
"we only have access to IBM ClearTeam Explorer to do it."
So you still have the CCRC CLI (rcleartool), assuming the CCRC WAN server is upgraded to ClearCase version 8.0.0.3 or higher.
mkbl, however, is a cleartool-only command (not rcleartool) in CC 8; rcleartool mkbl does exist for CC 9.x.
You could use that to recreate your foundation baseline, assuming you can tweak a (preferably dynamic or web-view) config spec to select the right versions.
The baseline still exists, so you should be able to rebase the integration stream to re-add the baselines. You may want to describe the baseline to find the stream it was created in, to be sure you know everything you need to know about it.
rcleartool rebase -baseline baseline1@/vobs/mypvob,baseline2@/vobs/mypvob
That should be enough to accomplish the job. If your int stream now has a successor baseline as its foundation, and the component is not read-only in your project, you'll get an error message; that message should tell you more about why you can't do the rebase.
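To double-check before rebasing, describe the baseline first, as suggested above; a small sketch using the same hypothetical baseline name:

rcleartool describe -long baseline:baseline1@/vobs/mypvob

The output includes the stream the baseline was created in and its component, which confirms that rebasing the integration stream will bring back what was removed.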

Adobe Experience Manager Workbench Check-out/in Issue

In short: I have Windows Server 2012 R2, AEM Forms (6.2), SQL Server (2014), and Workbench (6.2) on the same server. At first, when I installed and configured all of them, I could check my applications in and out from Workbench successfully. However, after my software team executed some scripts against the database, we could no longer check in/out from Workbench. The worst thing is that when I click check out, Workbench gives no error and no log entry, neither in the event log nor in the server application; it does nothing and does not perform my transaction. I have seen on forums that some people have the same issue, but nobody has written a solution.
Please, if anyone knows the solution, share it with us. What's wrong with my Workbench? What can I do to fix this issue?
The query that your software team ran turns off security on every single LiveCycle service and makes them all run as the system user. This includes the services used by Workbench, and it is very bad: some of the services rely on knowing who is logged in to operate correctly. In particular, how can LiveCycle know who has checked a resource in or out if the service always runs as system?
Your best bet is to restore the LiveCycle database, or at least the tb_sc_service_configuration table, to the state it was in before the script was run.
If you need to remove security on individual services, you should do it through the admin console, but only do it for your own processes. Never do it for system services unless the Adobe documentation says it is OK.
As JeremyP pointed out, modifying the Adobe database directly is a bad idea. The database should be treated as a black box that is only manipulated by Adobe code (either by doing things in the Adobe tools or making calls to Adobe APIs).
You can either make security changes manually through the adminui (as he indicates, which is the most common way of doing it) or programmatically using the Adobe client APIs. See the following links for sample code that uses the APIs:
Removing Security - http://help.adobe.com/en_US/livecycle/10.0/ProgramLC/WS624e3cba99b79e12e69a9941333732bac8-7f35.html
Setting the runAs user - http://help.adobe.com/en_US/livecycle/10.0/ProgramLC/WS624e3cba99b79e12e69a9941333732bac8-7f38.html
My company, 4Point, offers AEM Forms consulting services. We have an in-house Apache Ant library that wraps the code above to automate this (and other) common tasks that are typically required when deploying (and redeploying) AEM Forms solutions. It can be included as part of a consulting engagement.

ClearCase + integration from source + dependency management

In the industry I work in, it's customary to integrate from source (i.e. compile all libraries from scratch). This means that the source code tree has to be configured to show the appropriate content.
I know that for binary integration there are a lot of tools out there, tailored to the programming languages (Maven, CMake, Gradle, etc.).
We use base ClearCase as our source control tool. How does one go about implementing dependency management when integrating from source? In ClearCase, I would imagine this entails setting up the config spec to select the required versions of all of the required files. Are there any tools out there that implement this?
"Are there any tools out there that implement this?"
Yes: ClearCase UCM, meaning not base ClearCase.
Building from different versions of "components" (groups of files) is why you have the notion of:
UCM components
baseline: a label applied to all files in a UCM component
stream, which lists the exact foundation baselines you need for your program to work, or in your case, for your CI to take place.
Any UCM view on a UCM stream will generate the right config spec for you.
This is what a CI engine like Jenkins would use, via the Jenkins ClearCase UCM Plugin.
UCM does make this easier. But if your organization is politically averse to trying it, you can do a lot of the same things using base ClearCase.
Streams are not much more than branches with additional metadata added on (activities, timelines, baseline links, etc.).
Baselines are essentially labels with more metadata. That metadata connects baselines to descendant and sibling baselines, and lets you have a composite baseline that maps baselines across components. It also links baselines to streams so you can't delete a baseline used by a stream.
You don't need UCM to do UCM-like things; it just takes more time and isn't as nicely encapsulated.
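For illustration, a minimal base ClearCase sketch of that label-as-baseline approach; the VOB paths and label names here are hypothetical:

# Hypothetical config spec: pin each "component" to a label acting as a baseline
cat > configspec.txt <<'EOF'
element /vobs/libfoo/... LIBFOO_1_2_0
element /vobs/libbar/... LIBBAR_2_0_1
element * /main/LATEST
EOF
cleartool setcs configspec.txt

Every build then runs in a view that selects exactly the dependency versions named by the labels, which is what a UCM stream's foundation baselines give you with less ceremony.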

Twist to the standard “SQL database change workflow best practices”

Background
ASP.NET/C# Web App
MS SQL
Environments
Production
UAT
Test
Dev
We create patch scripts (XML and SQL) that are source-controlled in Mercurial. We have a command-line utility that installs patches to the DB (utility.exe install -patch) from a Release folder that the build packages. Patches have metadata that helps determine when a patch should run, and we log installed patches in a table in the target DB. All of this was covered in the three-year-old question:
SQL Server database change workflow best practices
Our Problem/Twist
I think this works well for tables, views, functions, and stored procedures. We struggle with application configuration data. Here are some touch points on application configurations:
New client. A BA performs a system study and fit analysis. Out of this comes a configuration Word document describing what application configurations need to be set up. Note that some of these may also come in phases over time. We need to get these new configurations into the system for the developer and for client UAT.
A developer works on a feature request or bug fix. A new configuration change comes out of that work. The configuration needs to make it into the system for testing and promotion to UAT and up.
QA finds that the developer missed an associated configuration change. That configuration needs to make it into the system for promotion to UAT and up.
The build goes to UAT. The client performs acceptance testing but finds they really want to change another, unassociated configuration and have it promoted with the changes. In other words, they found they want to change a business process via a configuration. The configuration needs to make it into the system for promotion to PRD.
As the client operates in PRD they may tweak application settings. These configurations need to make it into the system for future development and testing.
The general issue is making sure we account for all the configurations and don't accidentally miss any during promotions, which causes grief.
Our Attempts At A Process
a. We have had a member of the QA team write patches (XML and SQL) and check them in. This requires a build to make sure they get into the package. This approach really only took care of item 1 above; we fell apart on the other items. The nice thing was that, for the items that made it into the patches, it was just an install with the utility.
b. A developer threw together a Config page in the application. All the configurations could be uploaded and downloaded via an XML document, but this requires the app to be running. For item 1, a member of the QA team would manually set up configurations in the application and then download the Config.xml file. This XML file would be used to upload configurations in other environments. We would use a text diff tool to look at differences between config.xml files from different environments (see the diff sketch after this list). This addressed item 1 and the other items, but had problems: not all configurations made it into the XML document (which just needs to be fixed by a developer); some of the configurations didn't have a UI in the application, so for those you still had to go to the database manually; comparing the XML documents with a text diff was difficult at times (mostly, it seemed, due to sorting, but I'm sure there are other issues); the XML was not very human-readable; and finally, the XML document did not allow for deleting existing incorrect or outdated configs.
c. Recently we went with option (b), but over time, for a new client, we just started manually tracking configs and promoting them by hand (UI and DB) through the promotions. Needless to say, lots of human errors.
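As an aside on the diff pain in (b), a minimal sketch that removes formatting noise before comparing, assuming xmllint (from libxml2) is available; it only normalizes indentation, so element-ordering differences will still show up in the diff:

# Pretty-print both exports so whitespace noise drops out of the comparison
xmllint --format dev/Config.xml > dev.norm.xml
xmllint --format uat/Config.xml > uat.norm.xml
diff -u dev.norm.xml uat.norm.xml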
So we have been looking at solutions. Eventually it would be great to get in as much automation as possible. I'm looking at going with the scripting approach, focusing on process and documentation, and using Redgate Data Compare in addition to what we had been doing with compares on config.xml. With Redgate we have to create views, though, and there is no way to create update scripts from that approach except to update the scripts manually; it does at least allow a comparison without the app running. I'm also looking at pulling the configs out of our normal patches and making them a system independent of the build (utility.exe -patch -config). When I say focus on process, I mean things like: if we compare and find a config change, whether reported by the client or not, we still script it; it just means we have to have a process in place to quickly revalidate the config install before promoting to the next level. As for documentation, I'm looking at making the original QA document a living document instead of just an upfront document. The goal is to improve clarity and reduce missed configurations during promotion. Unfortunately, it doesn't improve speed of delivery.
Does anyone have any recommendations or best practices to pass along? Thanks.
Can I ask exactly what you mean by application configuration? I'm interpreting that as both:
Config files in the web application
Static reference data inside the database
Full disclosure: I work for Red Gate. You might be interested in taking a look at Deployment Manager; it's a deployment tool that deploys applications, databases, and configuration. It's free for up to 5 projects and target servers.
The approach it uses is to package application code and the database state into packages. These packages can be deployed into dev, test, staging and production environments. The same package is deployed to each environment.
Any application configuration that needs to change between environments is handled in one of the ways below:
Variable substitution in web.config. The tool allows you to specify override values for variables in these files, and set these per environment/server
Substituting the web.config file per environment.
Custom PowerShell scripts that are run pre/post-deploy. You could use these to execute custom SQL based on the environment or server (see the sketch after this list).
Static data within the database, using SQL Source Control's static data feature. I've written a blog post about how to supply different sets of static data to different environments/customers.
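For the pre/post-deploy hook idea above, here is a minimal sketch of applying environment-specific SQL. The original suggestion is PowerShell; this expresses the same idea as a plain sqlcmd call, and the variable and file names are hypothetical:

# Hypothetical post-deploy hook: apply the config rows for this environment.
# DB_SERVER, DB_NAME, and ENVIRONMENT would be injected by the deploy tool.
sqlcmd -S "$DB_SERVER" -d "$DB_NAME" -i "config.$ENVIRONMENT.sql"

The point is that the per-environment differences live in small, source-controlled scripts rather than in manual UI steps.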
This allows you to source control the application configurations and deploy them to different environments.

How to "explore" group of servers?

I need to check a group of servers (Unix, Linux) to know what kinds of services and software (including versions) are running there (check it once in a while and store the results in a database).
The idea is to always have fresh info about the whole environment, since it's constantly changing. Perhaps you can suggest a solution that already exists?
Currently I am thinking about using Nagios or Cacti plus plugins, but I am not sure if this solution will be optimal.
Nagios is a very powerful monitoring solution (the best, for me): open source, compatible with both Linux and Windows, reporting and notifications via email/SMS, a nice interface, many, many plugins, etc. I've already worked with it and was very satisfied.
Check Nico Largo's forum for installation instructions. If you are not familiar with the Linux command line, search for FAN (Fully Automated Nagios), a .iso image with Nagios already installed.
If you have any trouble during installation or configuration, post your questions on https://serverfault.com/.
Given that you want to poll for information on the system that can change dynamically, I would look at Check_MK.
It originally started as a plugin for Nagios that would poll a server for running services and generate the necessary configs to monitor anything it discovered. Since then, it has evolved into a complete monitoring solution that provides its own complete UI (still based on the Nagios core), so you are safe running it if you are already familiar with Nagios.
See the website: http://mathias-kettner.com/checkmk_monitoring_system.html
You may need to select that you wish to view the "English" perspective of the site on first visit.
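If you just want a feel for the kind of data such tools collect, here is a minimal hand-rolled sketch you could run on each host over ssh and load into your database on a schedule. The package-manager and netstat commands are assumptions about what is installed on the target hosts:

# Collect host identity, OS, installed software (with versions), and listening services
hostname
uname -a
dpkg -l 2>/dev/null || rpm -qa
netstat -tlnp 2>/dev/null | tail -n +3

Check_MK essentially industrializes this: its agent does the collection, and the server handles inventory, change detection, and storage.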
