rsync local code with a remote clearcase dynamic view - clearcase

I want to synchronize a local path with a dynamic ClearCase view hosted on a remote machine that is only accessible through ssh:
local:/me | <== ssh == | me@remote_host:/vobs/me/view_1
Those familiar with ClearCase know that in order to use a dynamic view you must issue the command 'cleartool setview view_1' on the remote host, where view_1 is the tag of a pre-existing dynamic view. The problem is that when I try to run that command through the --rsync-path option of rsync, it never comes back:
$ rsync '--rsync-path=`cleartool setview view_BAAAAAD;/usr/bin/rsync`' me@remote_host:/vobs/me/view_1 .
cleartool: Error: View tag not found: "setview view_BAAAAAD"
So it seems the command is actually issued. Yet when I feed it the correct tag:
$ rsync '--rsync-path=`cleartool setview view_1;/usr/bin/rsync`' me@remote_host:/vobs/me/view_1 .
Then it never comes back. When I run the command on the remote host through ssh, it doesn't ask for any input (neither tty nor stdin).
So I'm stuck with using static views. Any ideas?
PS:
The actual scheme is a little bit more complicated since the ssh connection is forwarded
I can use static views, but I'd prefer dynamic ones
I cannot install any daemon or script on the remote host

in order to use a dynamic view you must issue the command 'cleartool setview view_1' on the remote host, where view_1 is…
No you don't.
You only have to start it: cleartool startview view_1
And you can use it in /view/view_1/vobs/avob/....
Avoid setview, which creates a subshell in which the PATH might not be correct.
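Applied to the rsync scenario above, an untested sketch (paths adapted from the question; adjust the trailing path as needed) would start the view on the remote side and then copy through the view-extended path, avoiding setview's subshell entirely:
$ rsync '--rsync-path=cleartool startview view_1 >/dev/null 2>&1; /usr/bin/rsync' me@remote_host:/view/view_1/vobs/me/ .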

Related

Replacement of cleartool setview exec

For some reason, our company doesn't support ClearCase anymore, so I need to remove it from our scripts and replace it with a folder path.
For the command
cleartool setview -exec "$RUN_SCRIPT parameter1 parameter2" $MY_CC_VIEW
where RUN_SCRIPT=/vobs/sw/ecomps/tools/script_remote.sh (a script located in the view path),
how do I replace it?
Should I cd to the path the script is located in and then execute the command?
cd $MY_CC_VIEW_PATH/vobs/sw/ecomps && tools/script_remote.sh parameter1 parameter2
cleartool setview is for setting the view content of dynamic views, so if ClearCase is not running anymore, you would not be able to access any dynamic view anyway.
As I mentioned in "Python and ClearCase setview", never use setview in a script anyway: always use the full /view/viewTag/vobs/aVobTag/... path.
But again, if ClearCase is stopped, that dynamic view path would not be accessible: you should at least check out snapshot views, whose content would remain accessible even if there is no ClearCase server running.
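As a sketch, assuming ClearCase itself is still running and $MY_CC_VIEW is the view tag passed to setview in the question, the setview -exec call could become a plain invocation through the view-extended path:
/view/$MY_CC_VIEW/vobs/sw/ecomps/tools/script_remote.sh parameter1 parameter2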

Connect to docker sqlserver via ssh

I've created a docker container that contains an MSSQL database. On the command line, ip a gives an IP address for the container; however, trying to ssh into it with ssh username@docker_ip_address yields ssh: connect to host ip_address port 22: Connection refused. So I'm wondering whether I can even ssh into the container, so I don't always have to use docker exec ..., and if so, how would I go about doing that?
To ssh into a container, the following must be fulfilled:
An SSH server (OpenSSH) must be installed within the container and the ssh service must be running.
Port 22 must be published from the container (when you run the container). More info here: Publish ports on Docker.
The docker ps command should display the mapped port 22.
Hope the above information helps you understand the situation.
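A quick sketch of the second and third points (image name, host port and user are placeholders, not from the question):
docker run -d -p 2222:22 --name mssql_ssh some-mssql-image-with-sshd
docker ps --filter name=mssql_ssh   # should list 0.0.0.0:2222->22/tcp
ssh -p 2222 user@docker_host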
If your container contains a database server, the normal way to interact with it will be through an SQL client that connects to it; Google suggests SQL Server Management Studio, and connector libraries exist for popular languages. I'm not clear what you would do given a shell in the container, and my main recommendation here would be to focus on working with the server in the normal way.
Docker containers normally run a single process, and that's normally the main server process. In this case, the container runs only SQL Server. As some other answers here suggest, you'd need to significantly rearchitect the container to even have it be possible to run an ssh daemon, at which point you need to worry about a bunch of other things like ssh host keys and user accounts and passwords that a typical Docker image doesn't think about at all.
Also note that the Docker-internal IP address (what you got from ip addr; what docker inspect might tell you) is essentially useless. There are always better ways to reach a container (using inter-container DNS to communicate between containers; using the host's IP address or DNS name to reach published ports from the same or other hosts).
Basically, alter your Dockerfile to something like the following - it will install openssh-server, replace the prohibitive default config, and start the service:
# FROM a-image-with-mssql
RUN echo "root:toor" | chpasswd
RUN apt-get update
RUN apt-get install -y openssh-server
COPY entrypoint.sh .
RUN cd /;wget https://gist.githubusercontent.com/spekulant/e04521d6c6e1ccffbd3455c673518c5b/raw/1e4f6f2cb32caf3a4a9f73b02efdcbd5dde4ba7a/sshd_config
RUN rm /etc/ssh/sshd_config; cp sshd_config /etc/ssh/
ENTRYPOINT ["./entrypoint.sh"]
# further commands
Now you've got yourself an image with an ssh server inside. All you have to do is start the service; you can't do RUN service ssh start because it won't work (Docker specifics - refer to the documentation). You have to use an entrypoint like the following:
#!/bin/bash
set -e
sh -c 'service ssh start'
exec "$#"
Put it in a file entrypoint.sh next to your Dockerfile - remember to chmod 755 entrypoint.sh. One thing to mention here: you still wouldn't be able to ssh into the container, because the default SSH server configuration doesn't allow logging into the root account with a password. So you either change the config yourself and provide it to the image, or you can trust me and use the file I created - inspect it with the link from the Dockerfile - nothing malicious there, only a change from prohibit-password to yes.
Fortunately for us, the official MSSQL images are based on Ubuntu, so all the commands above fit the environment.
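To try it out, a rough sketch would be (image tag and host port are my placeholders; the root password is the one set by the chpasswd line in the Dockerfile above):
docker build -t mssql-ssh .
docker run -d -p 2222:22 --name mssql_ssh mssql-ssh
ssh -p 2222 root@localhost   # password: toor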
Edit
Be sure to ask if something is unclear or I'm jumping too fast.

Remotely launching a process with LLDB

I'm trying to remotely launch and debug a new process with lldb without much success.
Attaching to an already launched process works well by running these commands:
process connect <url>
process attach -P gdb-remote --pid <pid>
But if I want debugserver to launch the executable by itself, I run into trouble. In particular, I have no clue what arguments I should pass to target create.
According to this page, LLDB "will transparently take care of [..] downloading the executable in order to be able to debug", yet target create seems to always require a local file. If I specify the remote file via -r, I get either unable to open target file or remote --> local transfer without local path is not implemented yet errors. If I set the target to a local file (such as a local copy of the remote's loader executable) without using -r, then attempting process launch -p gdb-remote -s <remote path> makes LLDB try to run the local path on the remote machine and fail.
What are the correct commands I need to use in order to launch a remote process?
After I contacted LLDB's mailing list, Greg updated the documentation page, which now clearly explains what I have to do. (Specifically, I was missing the script lines, which appear to be the correct way to set the remote executable path.)
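For reference, the kind of session that documentation describes looks roughly like the sketch below; the URL, paths and the exact script call are my assumptions, so check them against the updated page:
target create /local/copy/of/executable
script lldb.target.modules[0].SetPlatformFileSpec(lldb.SBFileSpec("/remote/path/to/executable"))
process connect <url>
process launch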

PostgreSQL: How to create two instances on the same Windows machine?

I need to have an additional instance for our production server.
Is it possible?
Where to begin?
Using PostgreSQL 9.1 on Windows Server
If you already have the binaries, then adding a second instance ("cluster") is done by running initdb and then registering that new instance as a Windows service.
(I will not prefix the name of the executables with the path they are stored in. You need to either add the bin directory of the Postgres installation to your system wide PATH, use fully qualified names, or simply change into the bin directory to make it the current directory)
To do that, open a command line (cmd.exe) and use initdb to create the instance:
initdb -D c:\Data\PostgresInstance2 -W -A md5
-W makes initdb prompt you for the password of the superuser of that instance - make sure you remember the username and password you have given. -D specifies where the cluster should be created. Do NOT create it under c:\Program Files.
Once the instance (cluster) is initialized, edit c:\Data\PostgresInstance2\postgresql.conf to use a different port, e.g. port = 5433. If the instance should be reachable from the outside you also need to adjust listen_addresses.
You can check if everything works by manually starting the new instance:
pg_ctl start -D c:\Data\PostgresInstance2
Once you have changed the port (and adjusted other configuration parameters) you can create a Windows service for the new cluster:
pg_ctl register -N postgres2 -D c:\Data\PostgresInstance2
The service will execute with the "Local Network Account", so you have to make sure the privileges on the data directory are set up properly.
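To verify the new instance answers on its own port, a quick check could be (assuming postgres is the superuser name you chose during initdb):
psql -h localhost -p 5433 -U postgres -c "SHOW port;"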
@NewSheriff
Your start command for your second server needs to use the port you specified in the config file.
E.g. if using port 5433 instead of port 5432, then adding
-o "-p 5433"
to the end of your start-up command should get past the error message you mentioned.
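Combined with the start command shown earlier, that would be:
pg_ctl start -D c:\Data\PostgresInstance2 -o "-p 5433"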

How to copy file from SSH remote host to Jenkins Server

We are using a Jenkins server for our daily build process and execute some bash scripts on remote hosts over SSH. These scripts generate HTML log files on the remote hosts.
We are using the Copy To Slave plugin to copy files to slave machines and the Publish Over SSH plugin to manage SSH sessions in the build process.
Now the question: we want to copy some files (the scripts' log files) from the remote SSH host to the Jenkins server.
What would be the best way to do this (a plugin would be preferable, if one exists)?
EDIT:
sshpass is an option, but I'm looking for a plugin or a better way to do the job.
Use the sshpass command to send the file, under
Build Environment -> Execute shell script on remote host using ssh -> Post build script
Sample command:
sshpass -p "password" scp path/of/file <new_server_ip>:/path/of/file
This skips the password prompt for the scp command and provides the password to scp directly.
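Since the goal here is to pull the logs onto the Jenkins server, the same idea works with source and destination swapped (user, IP and paths are placeholders):
sshpass -p "password" scp <user>@<remote_server_ip>:/path/of/logfile.html "$WORKSPACE/"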
I think you can generate an ssh key pair and pass it to the slave as a parameter with, for example, the Config File Provider Plugin.
Then just use scp to retrieve the files, using this key pair for authentication.
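A sketch of what that scp call could look like in a build step (key file path, user, host and log location are placeholders):
scp -i "$SSH_KEY_FILE" -o StrictHostKeyChecking=no user@remote_host:/path/to/logs/*.html "$WORKSPACE/logs/"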
Obviously way too late, but in case you're already using publish-over-ssh, want to avoid duplicating the credentials, and have a shared library, you can use this piece of Groovy to obtain the host configuration:
import jenkins.plugins.publish_over_ssh.*

@NonCPS
def getSSHHost(name) {
    def found = null
    Jenkins.instance.getDescriptorByType(BapSshPublisherPlugin.Descriptor.class).each {
        it.hostConfigurations.each { host ->
            if (host.name == name) {
                found = host
            }
        }
    }
    found
}
As mentioned, this either requires a Global Shared Library (so that your code is trusted) or (probably) a number of admin approvals, sorry for that.
This returns a BapSshHostConfiguration.
For a password connection you can do:
def sshHost = getSSHHost('Configuration Name')
def host = [host: sshHost.hostname, user: sshHost.username, password: sshHost.password]
sshHost = null
sh("""
set +x
sshpass -p "${host.password}" scp -o StrictHostKeyChecking=no ${host.user}#${host.host}:filename.extension .
set -x
""")
This copies the file to your local work directory.
Probably not the best code ever, but I'm not a Groovy specialist. It works and that is enough for me. (The set +x is to avoid echoing the command in the log, which would show the password.) Getting rid of anything non-CPS (sshHost = null) before you perform a CPS call saves you a lot of headaches :)
Since it took me quite a while to figure this out, I wanted to share it for whoever comes next.
