Set ClearCase config spec and update a snapshot view that exists on another machine

I need to create a simple script (batch, python etc.) which I can run from my windows machine that will set the config spec and update the snapshot view that exists on some other machines.
Is there a ClearCase command that does that?
Or should I use psexec or something similar to run the command on each machine?
Thanks

There are two issues:
how to contact and run remote commands on other machines?
Considering recent Windows 10 releases come with SSH (integrated with Git for Windows, or even as an OpenSSH server), you could use that to reach Linux machines.
For other Windows machines, psexec or winrs are possible alternatives (see also WMI or ControlUp).
But unless you can open a remote session as admin, you probably won't have access to the snapshot view, which could be created in the Windows user profile (protected).
what ClearCase command to run?
Probably cleartool setcs (I mentioned here that a snapshot view is always updated after a setcs).
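For illustration, a minimal batch sketch combining the two (the host name, credentials, view path and config-spec path are all placeholders, not from the question). Note that for a snapshot view, cleartool setcs is typically run from within the view root, and it triggers an update of that view:
rem Hedged sketch: REMOTE_HOST, credentials and paths are placeholders.
rem The config spec file must already exist on the remote machine.
psexec \\REMOTE_HOST -u DOMAIN\user -p password cmd /c ^
 "cd /d C:\Views\my_snapshot_view && cleartool setcs C:\specs\config_spec.txt"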

Windows ClearCase view mapped drives disappear

We have ClearCase version 9.x running on a Linux host. We have started experiencing view-disconnect issues after reboots of the user systems (which run Windows 10 Pro and the Windows ClearCase 8.x client). The mapped view drives show as disconnected, and we need to re-map them each time the system reboots. In some cases only the view shortcut is disconnected from ClearCase Explorer, and we need to add the view shortcut again to map the drives. The default view drive M: shows as disconnected on a few systems; starting the ClearCase services and adding the view shortcuts again helps there. A few other systems with the same configuration work fine without any issues.
I have a few questions on this,
Am I missing anything specific with Windows, say patching, anti-virus, etc.?
Does the issue exist, and is it common, with the Windows 10 operating system?
How can the mapping issues be fixed? I am looking for solutions that can be tested.
Kindly suggest if you have come across these issues.
Regarding the drive M:\, this is directly linked to the ClearCase MVFS service: make sure the "Credentials Manager Service" service is set to run automatically.
Check the MVFS (for ClearCase 9.0) is properly installed: that will enable the dynamic views.
Regarding the shortcuts, check "Mapping an automatic view root directory to a drive letter": they should be subst commands, which you can make persistent across reboots.
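As a hedged illustration (the drive letter and view path are placeholders), such a subst mapping can be re-created at each logon with a small batch file placed in the user's Startup folder:
rem Hedged sketch: X: and the view path are placeholders.
rem Put this .bat in shell:startup (or a logon script) so the mapping is re-created after each reboot.
subst X: C:\Views\my_view_root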
But if those subst involve dynamic views, then again, the MVFS service needs to be correctly started, or those drive letters won't show up.
This can happen if the albd is not started by the time the user logs in, which delays the start of the credential manager service. The fast-logon optimization can allow the user to log in before services start.
If the views you're using are not local, you can decouple the credential manager service from its dependency on the albd service, and this could help.
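A hedged command-line sketch of both adjustments (the service name cccredmgr is an assumption; verify the exact names with sc query state= all):
rem Hedged sketch: service names are assumptions, check them with "sc query state= all".
sc config cccredmgr start= auto
rem Decouple the credential manager from the albd service (only if your views are not local):
sc config cccredmgr depend= /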
(which are running with Windows 10 Pro & Windows ClearCase client 8.x).
You need to upgrade to 9.0.1.x. ClearCase 8 has not been supported for (exactly) 2 years. If your hosts are running Windows 10/1909, you will need to update to 9.0.1.9, as that is the version that has been tested with 1909. Also, the MVFS "network provider" information does not survive a Windows "feature update" install, as that install is really a full OS install followed by settings migration, and MS's migration still leaves something to be desired.
You may want to do the upgrade via the "clean install" method of:
Uninstall ClearCase 8.0.x & reboot
Navigate to C:\ProgramData\IBM and remove the Rational.preserve* directories.
Navigate to \Windows\System32\Drivers and ensure that MVFS*.sys files are no longer present
Install ClearCase 9.0.1.9
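A hedged batch sketch of steps 2 and 3 above (default install paths are assumed; %%D is batch-file syntax, use %D at an interactive prompt):
rem Hedged sketch: default ProgramData and Windows paths are assumed.
for /d %%D in ("C:\ProgramData\IBM\Rational.preserve*") do rd /s /q "%%D"
rem This dir should report "File Not Found" before you install 9.0.1.9:
dir C:\Windows\System32\Drivers\mvfs*.sys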
ClearCase 9.0.1.8+ changed the MVFS from a 2-part (MVFS + MVFS Storage Filter) to a 1-part (MVFS only) driver configuration. We have seen reports of the old MVFS drivers not being completely removed on upgrade. In the worst cases, the old MVFS driver files were both present alongside the new MVFS driver, and all 3 were somehow loaded. This caused post-upgrade blue screens.

Cleartool commands in CCRC server side?

I am using ClearCase Remote Client (CCRC) and do not have admin rights. The original ClearCase supports the 'cleartool' command-line interface, whereas CCRC uses 'rcleartool'. Now, there are some trigger scripts to be placed at the VOB level by the admin. On the server side, will 'cleartool' commands work, or 'rcleartool'? Is it only for the client that it will be 'rcleartool' instead of 'cleartool'?
there are some trigger scripts to be placed at the vob level by the admin.
That would use the mktrtype command, which is a cleartool-only command (there is no rcleartool version).
Even the client-side mktrigger has no rcleartool equivalent.
On the server side, an admin would have access to cleartool and can use those two commands.
It is especially important to note that -- in the case of web and automatic views -- triggers do not execute on the "client" host. They execute on the WAN server. This places a number of limitations on what the triggers can do. For example, all interaction has to be through "clearprompt" commands as interactive triggers are not otherwise supported in web and automatic views.
To directly answer the question as I read it, the triggers:
would have to be placed using cleartool commands at the WAN server or some other LAN client;
would need to use cleartool commands (or some other LAN-client API) to query ClearCase, and not "rcleartool";
would need to ensure that any interactive elements detect that they are running beneath the WAN server (the CCASE_WEB_GUI environment variable is set to 1), and use clearprompt as appropriate.
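As a hedged sketch of what the admin could run server-side (the trigger name, script path and VOB tag are placeholders, not from the question):
rem Hedged sketch: trigger name, script path and VOB tag are placeholders.
rem Inside the script, CCASE_WEB_GUI=1 indicates execution under the WAN server.
cleartool mktrtype -element -all -preop checkin ^
 -execwin "ccperl \\server\triggers\pre_checkin.pl" ^
 -c "Pre-checkin validation" PRE_CI_CHECK@\myvob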

Vagrant Chef path for importing database

I have learned a lot about setting up Vagrant with Chef, and I am hitting a wall since I am new to Ruby, Vagrant and Chef, and I am not the biggest developer: mostly front end, but trying to set up a better environment to develop in.
I have searched and found great answers but left with one final question.
I have this code creating the database, but I cannot figure out where to place the database dump to import from...
# import an sql dump from your app_root/data/dump.sql to the my_database database
execute "import" do
  command "mysql -u root -p\"#{node['mysql']['server_root_password']}\" my_database < /chef/vagrant_db/database-name.mysql"
  action :run
end
So I need to know where the path should start from: the top-level home directory, or the top-level folder where I run vagrant up? Where it is currently, and a few other places I tried, are not working.
Any ideas would be great. I have searched Google so much that I am almost ready to give up.
Thanks
Tim
I would recommend using Chef::Config[:file_cache_path] for this. Let's say you want to get that SQL file from a remote web server:
db = File.join(Chef::Config[:file_cache_path], 'database.mysql')

remote_file db do
  source 'http://my.web.server/db.mysql'
  action :create_if_missing
  notifies :run, 'execute[import]', :immediately
end

execute "import" do
  command "mysql -u root -p\"#{node['mysql']['server_root_password']}\" my_database < #{db}"
  action :nothing
end
This will:
Add idempotency, meaning it won't try to import the database on each run
Leverage Chef's file_cache_path, which is persisted and guaranteed to be writable on supported Chef systems
Be extensible (you could easily change remote_file to cookbook_file or some custom resource to fetch the database)
Now, getting the file from Vagrant is a different story. By default, Vagrant mounts the directory where the Vagrantfile is located on the host (your local laptop) at /vagrant on the VM (the guest machine). You can mount additional locations (called "shared folders") from anywhere on your local laptop.
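As a hedged sketch (the ./data folder and /vagrant_db mount point are placeholder names), a dump kept next to the Vagrantfile is reachable through the default /vagrant mount, or through an extra shared folder:
# Vagrantfile sketch: "./data" and "/vagrant_db" are placeholder names.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"
  # The Vagrantfile directory is already shared at /vagrant by default,
  # so ./data/dump.sql appears at /vagrant/data/dump.sql on the guest.
  config.vm.synced_folder "./data", "/vagrant_db"
end
The recipe's command could then read from /vagrant_db/dump.sql instead of the /chef/vagrant_db path shown in the question.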
Bonus
If you are running the database on your local machine, you can actually share the socket over a shared folder with Vagrant :). Then you don't even need MySQL on your VM - it will use the one running on your host laptop.
Sources:
I write a lot of Chef :)

The system cannot find the specified drive in Jenkins

I want to copy some files from a network shared drive (mounted on my local machine as drive Z). I have written a batch file to copy the contents of the Z drive onto my local drive. This batch file runs successfully in cmd, but I am having an issue when I trigger it through Jenkins. Jenkins gives the following error:
"The system cannot find the specified drive"
Any help regarding this will be greatly appreciated.
Thanks,
Nouman.
If you don't want to use Jenkins plugins or scheduled tasks, here is a "groovy" way:
By Hand:
You can use the Groovy script console provided under Jenkins > Manage Jenkins > Script Console and execute the command to map the network drive within the Jenkins service. (This must be repeated once the Jenkins service is stopped.)
Automation:
Write your Groovy commands to a file named "init.groovy" and place it in your JENKINS_HOME directory, so the network drive gets mapped on Jenkins startup.
Groovy Commands - Windows:
Check available network drives using the Script-Console:
println "net use".execute().getText()
Your init.groovy would look like this:
def mapdrive = "net use z: \\\\YOUR_REMOTE_MACHINE\\SHARED_FOLDERNAME"
mapdrive.execute()
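A hedged refinement of the snippet above (host, share and credentials remain placeholders): waiting for the process and printing its output makes mapping failures visible in the Jenkins log:
// Hedged sketch: host, share and credentials are placeholders.
def proc = 'net use z: \\\\YOUR_REMOTE_MACHINE\\SHARED_FOLDERNAME /user:DOMAIN\\user password'.execute()
proc.waitFor()
println proc.text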
Yes, Jenkins uses different login credentials. To map a drive through Jenkins, use the command below in a Jenkins command prompt:
subst U: \drive\folder
and then run your queries after that.
You might run into permission issues. Jenkins might be executed with different user credentials, so it does not know the configured drive for the Windows share. Instead of using shell scripts, I suggest using a plugin. There is a set of Publish-over plugins that allow deployments to remote systems via a couple of protocols (ssh, cifs, etc). Have a look at the CIFS plugin that allows sending artifacts to a Windows share. Once the plugin is configured (i.e. the host is specified in the Manage Jenkins section) you can add to the post-build steps Send files to a windows share, where you can specify which file(s) shall be sent to which location.
Had this issue where my jenkins job was unable to read files present on the network drive.
I resolved it by adding a "net use" command as a pre-build step, i.e.:
Open your job.
Go to Pre Steps
From the drop down, select Execute Windows Batch Command
Enter the following command:
net use E: \\[server name]\[folder name] "[password]" /user:"[userid]"
Click Save
Execute the job
I was able to read files from my network drive by following the steps mentioned above.
It seemed to be a one-time activity: after the initial run I removed the batch command from my job, and it seemed to remember the mapped drive.
Try adding debugging commands to that bat file, or as a separate build step, such as net use, set (pay attention to variables like HOMEPATH and USERNAME) and a plain dir Z:\.
As said in another answer, the most likely reason is that Jenkins runs as the SYSTEM user, which has different permissions. One way around that: go to services (for example open Task Manager, go to the Services tab in it, click the Services button at the lower right corner of that tab), find the Jenkins service, open its properties, go to the "Log On" tab and set your normal user account as the one that runs Jenkins.
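A hedged command-line equivalent (the service name "Jenkins" and the account are assumptions; check the actual name in services.msc):
rem Hedged sketch: service name, account and password are placeholders.
sc config Jenkins obj= ".\jenkinsuser" password= "secret"
net stop Jenkins && net start Jenkins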
Basically you can access your network shared drive (Z) by server name or IP from a Jenkins command. Write \\192.168.x.xxx\Your_Folder instead of z:\Your_Folder.
For example:
mkdir \\192.168.x.xxx\Your_Folder
I was trying to copy files from one remote computer to another; the easy solution which worked for me is COPY iphone.exe \\192.xx.xx.xx\dev (dev is the share name on the C: drive at that IP address).
A similar issue showed up for us on Jenkins slaves set up on Windows Server 2008 following this documentation. The Jenkins agent failed to access the mounted network drives even after configuring the agent service with the correct user credentials.
Troubleshooting:
Jenkins could access the mounted network drives by their drive letters when connected via the JNLP agent (Launch agent via Java Web Start).
It stopped recognizing the drive letters soon after we installed the agent as a Windows service. Configuring the correct user credentials and restarting the agent did not help.
We could still access the drives via the command line while logged in to the machine with the above user.
Stop the agent service from services.msc and then uninstall it by running the command jenkins-slave.exe uninstall. The slave is disconnected at this point.
Reconnect the slave by launching the JNLP agent via Java Web Start. The agent can now access the network drives again.
Synopsis:
Do not install the slave agent as a Windows service if you want to keep accessing your mounted network drives using drive letters. But this is highly unreliable, as the agent might fail to restart after a machine reboot. Alternatively, see if Jenkins can access them via \\<ip_address>\<share>.
In order to access your remote drive
just use the command in cmd prompt
pushd "\sharedDrive\Folder1\DestinationFolder"
mkdir FolderName
popd
pushd >> navigates to the shared drive by mapping it to a temporary virtual drive letter
popd >> gets you back to the local directory and removes the temporary mapping

Running batch file remotely using Hudson

What is the simplest way to schedule a batch file to run on a remote machine using Hudson (latest and greatest version)? I was exploring the master slave setup. I created a dumb slave but I am not sure what the parameters should be so that I can trigger the batch file in the remote slave machine.
Basically, I am trying to run 2 different batch files on two different remote machines sequentially, triggered from my machine (the master). The step-by-step guide on the Hudson website is a dead link. There are similar questions posted on SO, but their answers do not quite work for me when I use the parameters they mention.
If anyone has done something similar please suggest ways to make this work.
(I know how to set up jobs and add a step to run a batch file, etc.; what I am having trouble configuring is doing this on a remote machine using Hudson's built-in features.)
UPDATE
Thank you all for the suggestions. Quick update on this:
What I wanted to get done is partially working, below are the steps followed to get to it -
Created new Node from Manage Nodes -> New Node -> set # of Executors as 1, Remote FS root set as '/var/hudson', set Launch method as using JNLP, set slavename and saved.
Once slave was set up (from master machine), I logged into the Slave physical machine, I downloaded the _slave.jar from http://masterserver:port/jnlpJars/slave.jar, and ran the following from command line at the download location -> java -jar _slave.jar -jnlpUrl http://masterserver:port/computer/slavename/slave-agent.jnlp. The connection was made successfully.
Checked 'Restrict where this project can be run' in the master job configuration, and set the parameter to slavename.
Checked "Add Build Step" for adding my batch job script
What I am still missing now is a way to connect to 2 slaves from one job in sequence, is that possible?
It is fairly easy and straightforward. Let's assume you already have a slave running. Then you configure the job as if you are locally on the target box. The setting for Restrict where this project can be run needs to be the node that you want it to run on. This is all for the job configuration.
For the slave configuration read the following pages.
Installing Hudson as a Windows service
Distributed builds
On Windows I prefer to run the slave as a service and let the remote machine manage the start up and shut down of the slave. The only disadvantage with this is that you need to upgrade the client every time you update the server. Just get the new client.jar from the server after the upgrade and put it on the slave. Then restart the slave and you are done.
I had trouble using the install-as-a-service option for the slave, even though I did it as a local administrator. I then used srvany to wrap the jar into a service. Here is a blog about it. The command that you need to wrap, you will get from your Hudson server on the slave page. For all of this to work, you should set up the slave management as JNLP.
If you have an SSH server on your target machine, you can use the SSH slave settings. These work for me like a charm. I use them with my Unix slaves. So far the SSH option with Unix is less of a hassle than the Windows service clients.
I had some similar trouble with slave setup and wrote up this blog post - I was running on Linux rather than Windows, but hopefully this will help.
I don't know how to use built-in Hudson features for this, but in one of my project builds I run a batch file that in turn uses PsTools
to run the job on a remote server. I found PsTools extremely easy to use (download, unpack and run the command with the right parameters), hence I opted for this; a minimal sketch follows.
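A hedged illustration (host, credentials and remote path are placeholders): PsExec propagates the remote command's exit code, so the build step can fail the job on error:
rem Hedged sketch: host, credentials and remote batch path are placeholders.
psexec \\remote-host -u DOMAIN\builduser -p secret -accepteula "C:\jobs\build_step.bat"
if errorlevel 1 exit /b 1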
