Is it okay to copy my checked-out file from the actual directory to my home directory in ClearCase?

I am new to ClearCase. Our organization's code is versioned with ClearCase, and I have to edit some code. The code consists of database .ddl files, two .ddl files per package.
I have checked out the pieces of code that I have to use, but I cannot see them anywhere. I have checked the FTP client I am using, as well as my local machine.
Now I am confused about two things:
After checking out, do I copy the .ddl files from their current location to my ClearCase home and then download them to my PC to use them? That is what I am doing right now.
Or is there another way to generate the .ddl files from PL/SQL Developer?
I can see the package and package body but cannot find the .ddl files.
Here are the ClearCase terminal commands and responses:
denoad32:ddl $ cleartool lsco -me
--04-03T03:02 Sayan.Sikdar checkout version "XXONT_OH_REL_SC_HOLD_PB.ddl" from /main/R12/8 (reserved)
--04-03T03:02 Sayan.Sikdar checkout version "XXONT_OH_REL_SC_HOLD_PS.ddl" from /main/R12/3 (reserved)
What I am doing right now: I have checked the files out, and I am copying them from their current location to my view home. Then I am downloading and using them.

Basically, you have checked out the files with the command "cleartool co". In order to access the files, you need to be inside your ClearCase view. If you are in the same session as when you performed the checkout, you should have access to the files you checked out.
The usual workflow is:
checkout the file
modify and save the file
checkin the file
All of these steps must be done inside a ClearCase view; a command-line sketch of the cycle follows.
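For example, from inside the view (a minimal sketch; the comment string is a placeholder, and the file name is taken from the lsco output above):
cleartool co -c "edit hold logic" XXONT_OH_REL_SC_HOLD_PB.ddl
(edit and save XXONT_OH_REL_SC_HOLD_PB.ddl in place)
cleartool ci -c "edit hold logic" XXONT_OH_REL_SC_HOLD_PB.ddl
If you change your mind before checking in, cleartool unco XXONT_OH_REL_SC_HOLD_PB.ddl cancels the checkout.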

download them to pc and use them?
If your PC has a ClearCase client, it can host a ClearCase view (snapshot or dynamic), and checked-out files will be downloaded automatically.
is there any other way to generate the ddl files from PL/SQL developer
If there is, that would explain why you don't see those files: they can be generated. For example, PostgreSQL's pg_dump can emit schema-only DDL for specific tables or views:
pg_dump -U user_name -h host database -s -t table_or_view_names -f table_or_view_names.sql
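Since the question mentions PL/SQL Developer, the Oracle counterpart is DBMS_METADATA.GET_DDL. A hedged sketch run through SQL*Plus follows; the credentials, connect string, schema name, and package name are assumptions inferred from the file names above:
sqlplus -s user/password@db <<EOF
SET LONG 100000
SET PAGESIZE 0
SELECT DBMS_METADATA.GET_DDL('PACKAGE_SPEC', 'XXONT_OH_REL_SC_HOLD', 'APPS') FROM dual;
SELECT DBMS_METADATA.GET_DDL('PACKAGE_BODY', 'XXONT_OH_REL_SC_HOLD', 'APPS') FROM dual;
EOF
SET LONG is needed because GET_DDL returns a CLOB that SQL*Plus would otherwise truncate.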

Related

What "option" to use with "WGET" for selecting only few files with particular extension from a FTP directory

I am trying to download files with a particular datestamp as an extension from a folder through an FTP server. Since the folder contains all the other files too, I want to download only the files with a particular datestamp.
I tried using wget files_datestamp*.extension, which didn't work.
I also tried using wget -i files_datestamp*.extension, which downloads all.
My question is: What option to use with wget to download only particular files that I am interested in?
wget http://collaboration.cmc.ec.gc.ca/cmc/CMOI/NetCDF/NMME/1p0deg/#%23%23/CanCM3_201904_r4i1p1_20190501*.nc4
The link you've shared is over HTTP and not FTP. As a result, it is not possible to glob over the filenames; that is feasible only over FTP.
With HTTP, you must have access to a directory listing page which tells you which files are available. Then use -r --accept-regex=<regex here> to download your files.
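A hedged example against the listing page from the question (the exact directory path is an assumption; --accept-regex needs wget 1.14 or later and matches the complete URL, and -nd keeps wget from recreating the directory tree locally):
wget -r -np -nd -l 1 --accept-regex '.*CanCM3_201904_r4i1p1_20190501.*\.nc4' http://collaboration.cmc.ec.gc.ca/cmc/CMOI/NetCDF/NMME/1p0deg/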

How to install the check_inode plugin in Nagios

I have to install a plugin on a Red Hat server where Nagios is already configured.
The plugin to be installed is inode_checker, which I got from this link:
how to install inode checker in nagios
When I opened this link, I found a shell script there.
Now I am not sure whether I have to place the shell script directly on the server in /usr/local/nagios/libexec/ or whether there is another way to do it, since the other plugins in that location seem to be different and I am not able to open them.
What am I doing wrong here? Please advise.
Yes, this is a bash script, so simply download it and place it in the folder where your other scripts sit. Make sure to make it executable:
chmod +x scriptname
Then you should be able to use it in Nagios by creating a Command object. You can find the folder where your scripts live by looking at the resources.cfg file, which should contain something like this:
$USER1$=/usr/lib64/nagios/plugins
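A minimal command definition to go with it (a sketch; the plugin file name and the -w/-c threshold options are assumptions, so adjust them to whatever the downloaded script actually accepts):
define command{
    command_name  check_inode
    command_line  $USER1$/check_inode.sh -w $ARG1$ -c $ARG2$
}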
Hope this helps.

Downloading artifacts from Jenkins using wget or curl

I am trying to download an artifact from a Jenkins project using a DOS batch script. The reason that this is more than trivial is that my artifact is a ZIP file which includes the Jenkins build number in its name, hence I don't know the exact file name.
My current plan of attack is to use wget pointing at /lastSuccessfulBuild/artifact/ to do some sort of recursive/mirror download.
If I do the following:
wget -r -np -l 1 -A zip --auth-no-challenge --http-user=**** --http-password=**** http://*.*.*.*:8080/job/MyProject/lastSuccessfulBuild/artifact/
(*s are chars I've changed for posting to SO)
I never get a ZIP file. If I omit the -A zip option, I do get the index.html, so I think the authorisation is working, unless it's some sort of session caching issue?
With -A zip I get as part of the response:
Removing ...+8080/job/MyProject/lastSuccessfulBuild/artifact/index.html since it should be rejected.
So I'm not sure if maybe it's removing that file and so not following its links? But doing -A zip,html doesn't work either.
I've tried several wget options, and also curl, but I am getting nowhere.
I don't know if I have the wrong wget options or whether there is something special about Jenkins authentication.
You can append /*zip*/desired_archive_name.zip to any folder path under the artifacts location.
If your ZIP file is the only artifact that the job archives, you can use:
http://*.*.*.*:8080/job/MyProject/lastSuccessfulBuild/artifact/*zip*/myfile.zip
where myfile.zip is just a name you assign to the downloadable archive, could be anything.
If you have multiple artifacts archived, you can either still get the ZIP file of all of them, and deal with individual ones on extraction. Or place the artifact that you want into a separate folder, and apply the /*zip*/ to that folder.
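For example, with curl (Jenkins accepts basic auth with a user name and a password or API token; the host and job name mirror the question, and user:password is a placeholder):
curl -fSL -u user:password -o myfile.zip "http://*.*.*.*:8080/job/MyProject/lastSuccessfulBuild/artifact/*zip*/myfile.zip"
The -f flag makes curl fail loudly on an HTTP error instead of silently saving the error page as myfile.zip.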

Is it possible to find a list of all hijacked files in a view in CCRC?

In the ClearCase Remote Client is it possible to find a list of all hijacked files in a given view?
Right-click on your view and select Show Pending Changes. All of your hijacked files will be displayed at the top of the list under the heading Hijacked Resources.
It's also possible through the UI, albeit indirectly.
If you run "Refresh > Update from Repository..." off the context menu, the UI will present you with a list of files it did not update upon completion. This will include all hijacked files.
If you're looking to check out the hijacked files, you can select them all and check them out from that display.
In a snapshot view, it is possible to do so using 'cleartool ls -recurse | grep hijacked' (Unix/Linux) or 'cleartool ls -recurse | findstr "hijacked"' (for Windows)
See the link Identifying hijacked files in a snapshot view
In a Web view or CCRC view, I would think that you should be able to do the same if you have installed rcleartool.
The command "rcleartool ls -recurse | grep hijacked" should work the same way.
Note: depending on the version of ClearCase on your server, the rcleartool you need is either a separate zip or included in the CCRC rich client. It is not included by default in the CCRC plugin for Eclipse.
Detecting hijacked files in a web or CCRC view can be tricky, depending on the state of the view itself.
For instance, the .COPYAREA.DB file, if missing or corrupt, means that all or some of the loaded files will appear to be hijacked. (see "About the .copyarea.dat and .copyarea.db files")
Other bugs (swg1PK64597, swg21433085) can affect the list of hijacked files as well, depending on your ClearCase version and your OS.
Another way to list hijacked files is to look for "skipped object" lines after an rcleartool update:
rcleartool update -noverwrite
(with -nov/erwrite leaving all hijacked files in the view with their current modifications)
Hijacking an element in a snapshot view involves making it writable and making a change to it. There is no lshijack or lsprivate -hijacked command to list the files. While the cleartool update operation does generate a log identifying hijacked files, the best way is to use the cleartool ls command, which identifies hijacked versions in much less time than an update would take.
Use cleartool ls from the command line and look for the [hijacked] tag on objects.
Example output:
%> cleartool ls
archive.ppt@@\main\1 [hijacked] Rule: \main\LATEST
project.doc@@\main\1 Rule: \main\LATEST
doc_resources.ppt@@\main\2 [hijacked] Rule: \main\LATEST
To obtain a list of all hijacked files in a snapshot view, use the following command:
On UNIX® and Linux® you can run the following command from a snapshot view:
cleartool ls -recurse | grep "hijacked"
On Microsoft® Windows® you can run the following command from a snapshot view:
cleartool ls -recurse | findstr "hijacked"
This command performs a recursive "cleartool ls" and then uses the "grep" or "findstr" command, respectively, to filter for the lines that have the [hijacked] tag associated with them.
Note: grep is a native UNIX command; however, it can be run on Windows if a suitable utility is installed, for example from the GNU tools or Cygwin.
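If the goal is to check the hijacked files out again from the command line, the list can be fed straight back to cleartool (a minimal Unix sketch, assuming element names contain no spaces; -usehijack keeps the hijacked content as the checked-out version, and GNU xargs' -r avoids running checkout when nothing is hijacked):
cleartool ls -recurse | grep '\[hijacked\]' | sed 's/@@.*//' | xargs -r cleartool co -nc -usehijack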

How to overwrite already existing workspaces in RTC using the scm or lscm command

My requirement is to connect to RTC and automatically check out the files from the stream to the repository workspace.
I am putting the following commands in a .bat file:
lscm login -r https://rtc.usaa.com/ccm -u uname -P password -n nickname -c
scm create workspace (workspacename) -r nickname -s (streamname)
lscm load (workspace name) -r nickname -d (directory path, e.g. c:/codebase/rtc)
lscm logout -r nickname
When I execute the above batch file the first time, it creates the workspace and loads the project into the workspace path.
When I execute it a second time, it creates a duplicate workspace with the same name and throws an exception while loading.
I want to overwrite the already existing workspace every time I load, but I did not find a command for that.
Can you please suggest another way of doing this, or a command that solves my problem?
It is best to delete the existing local sandbox before loading the new one. In my setup, we execute the following steps (see the sketch after the list):
1. Delete the local sandbox (if it makes sense, delete the existing repository workspace too)
2. Create a new repository workspace
3. Load the new repository workspace into the local sandbox
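A hedged .bat sketch of the re-load part, reusing the names from the question and assuming the repository workspace already exists from the first run (the sandbox path is normalized from the question; whether your CLI version can also delete the repository workspace from the command line varies, so this only wipes the local sandbox):
lscm login -r https://rtc.usaa.com/ccm -u uname -P password -n nickname -c
rmdir /s /q c:\codebase\rtc
lscm load workspacename -r nickname -d c:/codebase/rtc --force
lscm logout -r nickname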
Either create a uniquely named workspace (perhaps by sticking a time stamp into the name?) and then delete it when you're done, or use the workspace's UUID from the creation step.
Instead of deleting and rewriting the files in the workspace, you can try accepting the incoming changes before the load and then, with the "--force" option, overwrite only the changed files.
Accept using: scm accept --flow-components -r <> -u <> -p <> --target
Add --force at the end of the load command you are using.
This should work fine.
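Concretely, with the placeholders filled in from the question (the --target value is an assumption; the flags themselves are the ones given above):
lscm accept --flow-components -r nickname -u uname -p password --target workspacename
lscm load workspacename -r nickname -d c:/codebase/rtc --force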