This is in some ways similar to "Ansible local_action on host without local ssh daemon", which asks about running a localhost-only inventory without using SSH at all.
In my case I have an actual inventory of hosts, but only want to gather information (the groups each host belongs to) from the inventory. Everything should only need to run locally, without involving SSH at all.
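For context, the inventory I pass in is shaped roughly like this (the host and group names here are made up for illustration):
# inventory.yml (illustrative)
all:
  children:
    webservers:
      hosts:
        host1:
    dbservers:
      hosts:
        host1:
        host2: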
- name: Run Over Hosts
  hosts: 'all'
  connection: local
  gather_facts: false
  tasks:
    - name: host groups
      ansible.builtin.debug:
        msg: "{{ inventory_hostname }} {{ group_names | to_json }}"
I then run it using...
ansible-playbook -i inventory.yml list_host_groups
The output of the above is later processed to generate a host-to-groups list.
But it still calls SSH!
Adding -vvv shows it is doing so before even running the first task. It does not even print the SSH command the way it normally does with -vvv; yet SSH is clearly being invoked, because SSH is prompting for a password.
How can I stop it from calling SSH?
I know this question has been asked before, but I haven't managed to solve it from those answers.
System spec: I am running the server on Ubuntu 22.10; the Docker version is 20.10.16 and the docker-compose version is 1.29.2.
What I want to achieve: I have Nextcloud running as a Docker container and MariaDB installed on the host machine, uncontainerized. I want Nextcloud in the container to use a database I created in that MariaDB. But the hostname is always rejected, and I get the following error:
Failed to connect to the database: An exception occurred in the driver: SQLSTATE[HY000] [2002] Connection refused
Troubleshooting I've tried so far
Option 1: I added an extra host to my docker compose file so the container can reach the host system. Here's what my docker compose file looks like:
version: '2'
services:
  app:
    image: nextcloud
    restart: always
    ports:
      - 8080:80
    volumes:
      - /home/ritzz/nextcloud:/var/www/html
    extra_hosts:
      - host.docker.internal:host-gateway
However, when adding the database, using host.docker.internal as the hostname still gives the error mentioned above.
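As a quick check that the extra host entry actually made it into the container, something like this should print the gateway IP (assuming the compose service is named app as above):
docker-compose exec app getent hosts host.docker.internal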
Option 2: Using the Docker host IP as the database hostname. I used the following command to find my host IP, which was 172.17.0.1:
ip addr show docker0
However, again, entering 172.17.0.1 or 172.17.0.1:3306 returns the same error.
Option 3: I saw a suggestion on the Internet to use network_mode: host to make the container share the host's network. However, the container uses port 80 and I can't give up port 80 on my host, so I assume this method won't work for me.
Additional Troubleshooting
I made sure MariaDB is running with sudo systemctl status mariadb, checked that it's listening on port 3306 with sudo netstat -tlnp, and logged in to the database with the user and password using sudo mysql -u<username> -p<password> <database>; the login succeeds.
I am at my wit's end with this. Hopefully someone can help me out.
I've created a Docker container that contains an MSSQL database. On the command line, ip a gives an IP address for the container; however, trying to ssh into it with ssh username@docker_ip_address yields ssh: connect to host ip_address port 22: Connection refused. So I'm wondering whether I can ssh into the container at all, so I don't always have to use docker exec ..., and if so, how I would go about doing that.
To ssh into a container you need to fulfil the following:
An SSH server (OpenSSH) must be installed within the container and the ssh service must be running.
Port 22 must be published from the container (when you run the container); more info here > Publish ports on Docker.
The docker ps command should display the mapped port 22.
Hope the above information helps you understand the situation...
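As a concrete illustration (the image name here is hypothetical; it must already have sshd installed and running, as described above):
docker run -d --name mydb -p 1433:1433 -p 2222:22 my-mssql-with-sshd
docker ps                      # should show 0.0.0.0:2222->22/tcp among the mapped ports
ssh -p 2222 root@localhost     # ssh now reaches the container's sshd via the published port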
If your container contains a database server, the normal way to interact with it will be through a SQL client that connects to it; Google suggests SQL Server Management Studio, and connector libraries exist for popular languages. I'm not clear what you would do given a shell in the container, and my main recommendation here would be to focus on working with the server in the normal way.
Docker containers normally run a single process, and that's normally the main server process. In this case, the container runs only SQL Server. As some other answers here suggest, you'd need to significantly rearchitect the container to even have it be possible to run an ssh daemon, at which point you need to worry about a bunch of other things like ssh host keys and user accounts and passwords that a typical Docker image doesn't think about at all.
Also note that the Docker-internal IP address (what you got from ip addr; what docker inspect might tell you) is essentially useless. There are always better ways to reach a container (using inter-container DNS to communicate between containers; using the host's IP address or DNS name to reach published ports from the same or other hosts).
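For example, here is a sketch of working with the server "the normal way" over a published port, assuming the standard Microsoft image and the sqlcmd client from mssql-tools (the password is a placeholder):
docker run -d --name mssql -p 1433:1433 -e ACCEPT_EULA=Y -e SA_PASSWORD='Str0ngPassw0rd!' mcr.microsoft.com/mssql/server:2019-latest
sqlcmd -S localhost,1433 -U sa -P 'Str0ngPassw0rd!' -Q 'SELECT @@VERSION'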
Basically, alter your Dockerfile to something like the following - it will install openssh-server, replace the prohibitive default config and start the service:
# FROM a-image-with-mssql
# Set a root password so ssh logins have something to authenticate against
RUN echo "root:toor" | chpasswd
# Install the OpenSSH server
RUN apt-get update && apt-get install -y openssh-server
COPY entrypoint.sh .
# Fetch an sshd_config that permits root password logins and put it in place
RUN wget -O /etc/ssh/sshd_config https://gist.githubusercontent.com/spekulant/e04521d6c6e1ccffbd3455c673518c5b/raw/1e4f6f2cb32caf3a4a9f73b02efdcbd5dde4ba7a/sshd_config
ENTRYPOINT ["./entrypoint.sh"]
# further commands
Now you've got yourself an image with an ssh server inside; all you have to do is start the service. You can't just do RUN service ssh start because that won't work - a Docker specific, refer to the documentation. You have to use an ENTRYPOINT like the following:
#!/bin/bash
set -e
# Start the ssh daemon, then hand control to the image's original command
service ssh start
exec "$@"
Put it in a file entrypoint.sh next to your Dockerfile - and remember to chmod 755 entrypoint.sh. One thing to mention here: with the stock config you still wouldn't be able to ssh into the container - the default SSH server configuration doesn't allow logging into the root account with a password. So you either change the config yourself and provide it to the image, or you can trust me and use the file I created - inspect it via the link in the Dockerfile - nothing malicious there, only a change from prohibit-password to yes.
Fortunately for us, the official MSSQL images are based on Ubuntu, so all the commands above fit perfectly into that environment.
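With that in place, building and connecting would look roughly like this (the image tag is arbitrary; the root password is the "toor" set in the Dockerfile above):
docker build -t mssql-ssh .
docker run -d --name mssql-ssh -p 1433:1433 -p 2222:22 mssql-ssh
ssh -p 2222 root@localhost    # password: toor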
Edit
Be sure to ask if something is unclear or I'm jumping too fast.
I tried to connect to an Oracle 11 database in Docker (https://hub.docker.com/r/sath89/oracle-xe-11g/).
I started the container with:
docker run -d -p 8080:8080 -p 1521:1521 -e DEFAULT_SYS_PASS=sYs-p#ssw0rd sath89/oracle-xe-11g
From this description:
hostname: localhost
port: 1521
sid: xe
username: system
password: oracle
I made a URL - jdbc:oracle:thin:@192.168.99.100:1521:xe
With squirrel-sql I get an error:
class java.sql.SQLException: ORA-00604: error occurred at recursive SQL level 1
ORA-12705: Cannot access NLS data files or invalid environment specified
But if I try to connect with SQLplus, that works fine:
docker exec -ti oracle_id sqlplus bash
sqlplus
SquirrelSQL needs to have some NLS variables set before logging in. For the Docker connection, note that you have "bash" at the end of your command. This not only tells the connection that you'll be using the bash shell, it sets up the environment using the bashrc (and possibly a profile, too). You're coming from your local machine rather than over SSH, so the local machine's environment is being used instead of the SSH session's.
I believe there is a squirrel-sql.bat file that could unset and then set the environment, or, better yet, let's just unset it in the registry and let the local connection take its course:
On your Windows machine, search for an NLS_LANG subkey in the registry under \HKEY_LOCAL_MACHINE\SOFTWARE\ORACLE; rename it, save the change, reboot and retry.
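Alternatively, if you go the squirrel-sql.bat route, a minimal sketch of setting the environment before launch (the NLS_LANG value shown is only an example; use one matching your database):
set NLS_LANG=AMERICAN_AMERICA.AL32UTF8
squirrel-sql.bat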
I'm not familiar with Squirrel SQL, but you may also be missing a proper setup of the jar files. Check whether your jar files are configured correctly; depending on your version, it's going to look something like this:
%Oracle_\jdbc\lib\ojdbc.jar
I am able to do the following manually from my Ansible controller server:
ssh <> (using my userid)
sudo /bin/su - <>
..
now run commands as orafmw
...
When trying to do the same steps using Ansible, my playbook has the following entry:
- role: fmw-software
  become: true
  become_user: 'orafmw'
  become_method: sudo
  become_flags: '/bin/su'
This fails as follows -
ansible-playbook weblogic-fmw-domain.yml
PLAY [Configure Oracle Linux 7.1 with WebLogic 12c R2 domain] ******************
TASK [setup] *******************************************************************
ok: [weblogic]
TASK [fmw-software : Create installer directory] *******************************
fatal: [weblogic]: FAILED! => {"failed": true, "msg": "Timeout (12s) waiting for privilege escalation prompt: "}
to retry, use: --limit @/tmp/ansible-weblogic-fmw-infra-12c-R2-master/weblogic-fmw-domain.retry
PLAY RECAP *********************************************************************
weblogic : ok=1 changed=0 unreachable=0 failed=1
Can anyone point out what I might be doing wrong here?
The docs suggest - http://docs.ansible.com/ansible/become.html
" Only one method may be enabled per host
Methods cannot be chained. You cannot use sudo /bin/su - to become a user, you need to have privileges to run the command as that user in sudo or be able to su directly to it (the same for pbrun, pfexec or other supported methods). "
Is the above section applicable to my use case?
The become_flags seem redundant for achieving your goal of running commands as the "orafmw" account. As a quick test, if you do this:
- role: fmw-software
  become: true
  become_user: 'orafmw'
  become_method: sudo
  command: touch /tmp/whomadethis
Does the new file /tmp/whomadethis get created on the remote machine, owned by the orafmw account? If so, replace the call the command: module makes with the commands you need to run.
Better yet, don't use the command: module; rather, use built-in Ansible modules with the become_* options set as needed.
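For instance, a sketch of such a task using a built-in module (the path and directory name are illustrative, not from the original playbook):
- name: Create installer directory as orafmw
  become: true
  become_user: orafmw
  become_method: sudo
  ansible.builtin.file:
    path: /tmp/orafmw-installers
    state: directory
    mode: '0755'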
We are using a Jenkins server for our daily build process, which executes some bash scripts on remote hosts over SSH. These scripts generate HTML log files on the remote hosts.
We are using the Copy to Slave plugin to copy files onto slave machines and the Publish Over SSH plugin to manage SSH sessions in the build process.
Now the question: we want to copy some files (the scripts' log files) from the remote SSH host to the Jenkins server. What would be the best way to do this (a plugin would be preferable, if one exists)?
EDIT :
sshpass is an option, but I'm looking for a plugin or a better way to do the job.
Use the sshpass command to send the file, in Build Environment -> Execute shell script on remote host using ssh -> Post build script.
Sample command:
sshpass -p "password" scp path/of/file <new_server_ip>:/path/of/file
This skips the password prompt for the scp command by supplying the password to scp directly.
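Note that the sample above pushes a file to the remote machine; since the goal here is to pull logs back to the Jenkins server, the source and destination can be swapped (user, host and paths are placeholders):
sshpass -p "password" scp <user>@<remote_server_ip>:/path/of/logfile "$WORKSPACE/"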
I think you can generate an ssh keypair and pass it to the slave as a parameter with, for example, the Config File Provider Plugin.
Then just use scp to retrieve the files, using this keypair for authentication.
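A sketch of what that retrieval could look like, assuming the plugin exposes the private key path as $SSH_KEY_FILE (the user, host and remote path are placeholders):
scp -i "$SSH_KEY_FILE" -o StrictHostKeyChecking=no builduser@remote-host:/var/log/build/report.html "$WORKSPACE/"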
Obviously way too late, but in case you're already using publish-over-ssh, want to avoid duplicating the credentials, and have a shared library, you can use this piece of Groovy to obtain the host configuration:
import jenkins.plugins.publish_over_ssh.*

@NonCPS
def getSSHHost(name) {
    def found = null
    Jenkins.instance.getDescriptorByType(BapSshPublisherPlugin.Descriptor.class).each {
        it.hostConfigurations.each { host ->
            if (host.name == name) {
                found = host
            }
        }
    }
    found
}
As mentioned, this either requires a Global Shared Library (so that your code is trusted) or (probably) a number of admin approvals, sorry for that.
This returns a BapSshHostConfiguration.
For a password connection you can do:
def sshHost = getSSHHost('Configuration Name')
def host = [host: sshHost.hostname, user: sshHost.username, password: sshHost.password]
sshHost = null
sh("""
    set +x
    sshpass -p "${host.password}" scp -o StrictHostKeyChecking=no ${host.user}@${host.host}:filename.extension .
    set -x
""")
This copies the file to your local work directory.
Probably not the best code ever, but I'm not a Groovy specialist. It works and that is enough for me. (The set +x avoids echoing the command, and thus the password, in the log.) Getting rid of anything non-CPS (sshHost = null) before you perform a CPS call saves you a lot of headaches. :)
Since it took me quite a while to figure this out, I wanted to share it for whoever comes next.