I am trying to access an EC2 instance from another EC2 instance that has Ansible installed; my hosts are set up behind a bastion host. I have been following this post http://blog.scottlowe.org/2015/12/24/running-ansible-through-ssh-bastion-host/ which seems fairly straightforward.
Note: I read this other thread (Ansible with a bastion host / jump box?) but it didn't help.
I can SSH directly and ping from this host to the IP returned by dynamic inventory (a public IP), so why does a simple Ansible ping fail when plain SSH and ICMP ping both work?
root@ip-host:/etc/ansible# ansible -i /etc/ansible/inventory/ec2.py tag_managed_ansible -m ping -vvvv
Using /etc/ansible/ansible.cfg as config file
Loaded callback minimal of type stdout, v2.0
<x.x.x.x> ESTABLISH SSH CONNECTION FOR USER: ubuntu
<x.x.x.x> SSH: EXEC ssh -C -vvv -F /root/.ssh/config -o ControlMaster=auto -o ControlPersist=10m -o 'IdentityFile="/home/ubuntu/.ssh/asdev.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ubuntu -o ConnectTimeout=10 -o 'ControlPath=~/.ssh/ansible-%r@%h:%p' x.x.x.x '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1466601336.03-126192442556847 `" && echo ansible-tmp-1466601336.03-126192442556847="` echo
$HOME/.ansible/tmp/ansible-tmp-1466601336.03-126192442556847 `" ) && sleep 0'"'"''
x.x.x.x | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh.",
"unreachable": true
}
Debug output from a direct SSH connection, which works (via the ProxyCommand set up in /root/.ssh/config):
root@ip-host:/etc/ansible# ssh devtest3 -v
OpenSSH_6.6.1, OpenSSL 1.0.1f 6 Jan 2014
debug1: Reading configuration data /root/.ssh/config
debug1: /root/.ssh/config line 1: Applying options for *
debug1: /root/.ssh/config line 769: Applying options for devtest3
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 1: Applying options for *
debug1: /etc/ssh/ssh_config line 769: Applying options for devtest3
debug1: Hostname has changed; re-reading configuration
debug1: Reading configuration data /root/.ssh/config
debug1: /root/.ssh/config line 1: Applying options for *
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 1: Applying options for *
debug1: auto-mux: Trying existing master
debug1: multiplexing control connection
debug2: fd 6 setting O_NONBLOCK
debug3: fd 6 is O_NONBLOCK
debug1: channel 1: new [mux-control]
debug3: channel_post_mux_listener: new mux channel 1 fd 6
debug3: mux_master_read_cb: channel 1: hello sent
debug2: set_control_persist_exit_time: cancel scheduled exit
debug3: mux_master_read_cb: channel 1 packet type 0x00000001 len 4
debug2: process_mux_master_hello: channel 1 slave version 4
debug3: mux_master_read_cb: channel 1 packet type 0x10000004 len 4
debug2: process_mux_alive_check: channel 1: alive check
debug3: mux_master_read_cb: channel 1 packet type 0x10000002 len 50
debug2: process_mux_new_session: channel 1: request tty 1, X 1, agent 0, subsys 0, term "xterm-256color", cmd "", env 0
debug3: mm_receive_fd: recvmsg: Resource temporarily unavailable
debug3: mm_receive_fd: recvmsg: Resource temporarily unavailable
debug3: mm_receive_fd: recvmsg: Resource temporarily unavailable
debug3: process_mux_new_session: got fds stdin 7, stdout 8, stderr 9
debug1: channel 2: new [client-session]
debug2: process_mux_new_session: channel_new: 2 linked to control channel 1
debug2: channel 2: send open
debug2: callback start
debug2: client_session2_setup: id 2
debug2: channel 2: request pty-req confirm 1
debug2: channel 2: request shell confirm 1
debug3: mux_session_confirm: sending success reply
debug2: callback done
debug2: channel 2: open confirm rwindow 0 rmax 32768
debug1: mux_client_request_session: master session id: 2
debug2: channel_input_status_confirm: type 99 id 2
debug2: PTY allocation request accepted on channel 2
debug2: channel 2: rcvd adjust 2097152
debug2: channel_input_status_confirm: type 99 id 2
debug2: shell request accepted on channel 2
Last login: Wed Jun 22 13:20:11 2016 from
ubuntu@ip-host:~$
Here's the SSH section of ansible.cfg:
[ssh_connection]
ssh_args = -F /root/.ssh/config -o ControlMaster=auto -o ControlPersist=10m
control_path = ~/.ssh/ansible-%%r@%%h:%%p
Settings in /root/.ssh/config:
Host devtest3
HostName x.x.x.x
Port 22
User ubuntu
StrictHostKeyChecking no
IdentitiesOnly yes
IdentityFile ~/.ssh/asdev.pem
#(I tried both)
#ProxyCommand ssh -W %h:%p proxy
ProxyCommand ssh -q proxy nc -q0 %h %p
Yes. The dynamic inventory references hosts by IP, whereas my .ssh/config originally had a hostname-based Host entry. The solution was to define a wildcard Host entry for the IPs in .ssh/config.
https://groups.google.com/forum/#!msg/ansible-project/Y_OBPUFeG-M/buVfxdRuKAAJ;context-place=forum/ansible-project
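For reference, a minimal sketch of such a wildcard entry (the 52.* pattern is a placeholder for whatever public IP range your instances use, and "proxy" is assumed to be the bastion Host alias already defined in the same file):
Host 52.*
User ubuntu
IdentityFile ~/.ssh/asdev.pem
IdentitiesOnly yes
StrictHostKeyChecking no
ProxyCommand ssh -W %h:%p proxy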
To all,
I am writing a service that runs the HTTPS protocol and accepts secure connections using OpenSSL.
After that, I tested the SSL connection using nmap with the following command:
nmap --script ssl-enum-ciphers -p 443 192.168.2.1
Nmap scan report for 192.168.2.1
Host is up (0.0029s latency).
PORT STATE SERVICE
443/tcp open https
| ssl-enum-ciphers:
| TLSv1.2:
| ciphers:
| TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 (secp256k1) - A
| TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (secp256k1) - A
| compressors:
| NULL
| cipher preference: client
|_ least strength: A
However, if the argument -sV is added, then it displays the following:
nmap --script ssl-enum-ciphers -sV -p 443 192.168.2.1
Starting Nmap 7.01 ( https://nmap.org ) at 2021-05-25 09:15 CST
Nmap scan report for 192.168.2.1
Host is up (0.0030s latency).
PORT STATE SERVICE VERSION
443/tcp open ssl/https?
Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 12.79 seconds
-sV is used to probe service/version info, so I am wondering: is it because I am using ECDHE ciphers only?
Anyway, here's how I set up my SSL connection (error checking removed for readability):
SSL_library_init();
SSL_load_error_strings();
/* TLS 1.2-only server context */
CTX = SSL_CTX_new(TLSv1_2_server_method());
/* ECDHE-only cipher list */
SSL_CTX_set_cipher_list(CTX, "ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384");
/* let OpenSSL pick the ECDH curve automatically */
SSL_CTX_ctrl(CTX, SSL_CTRL_SET_ECDH_AUTO, 1, NULL);
SSL_CTX_use_certificate_file(CTX, pem, SSL_FILETYPE_PEM);
SSL_CTX_use_PrivateKey_file(CTX, pem, SSL_FILETYPE_PEM);
SSL_CTX_use_certificate_chain_file(CTX, chain);
I suspect the ECDHE ciphers, because if I use the cipher list "AES128-SHA256:AES256-SHA256:AES128-GCM-SHA256:AES256-GCM-SHA384", everything seems to work fine.
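For what it's worth, here is a minimal sketch of pinning an explicit ECDHE curve instead of relying on SSL_CTRL_SET_ECDH_AUTO, which some OpenSSL 1.0.x builds do not honor; prime256v1 is an assumption, use whichever curve your clients support:
#include <openssl/ssl.h>
#include <openssl/ec.h>
#include <openssl/obj_mac.h>

/* Sketch: select a fixed curve for ECDHE (OpenSSL 1.0.x-style API) */
static int set_ecdhe_curve(SSL_CTX *ctx)
{
    EC_KEY *ecdh = EC_KEY_new_by_curve_name(NID_X9_62_prime256v1); /* assumed curve */
    if (ecdh == NULL)
        return 0;
    SSL_CTX_set_tmp_ecdh(ctx, ecdh); /* install the curve for ECDHE key exchange */
    EC_KEY_free(ecdh);               /* the context keeps its own copy */
    return 1;
}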
Any help is appreciated, thanks.
I would like to confirm that my message has actually been transmitted on the CAN bus, using the SocketCAN library.
The SocketCAN documentation describes this possibility using the recvmsg() function, but I am having problems implementing it.
What I want to achieve is confirmation that my message won the arbitration process.
I think that by mentioning recvmsg(2) you are referring to the following paragraph of the SocketCAN docs:
MSG_CONFIRM: set when the frame was sent via the socket it is received on.
This flag can be interpreted as a 'transmission confirmation' when the
CAN driver supports the echo of frames on driver level, see 3.2 and 6.2.
In order to receive such messages, CAN_RAW_RECV_OWN_MSGS must be set.
The key words here are "when the CAN driver supports the echo of frames on driver level", so you have to ensure that first. Next, you need to enable the corresponding flag (CAN_RAW_RECV_OWN_MSGS). Finally, such confirmation has nothing to do with arbitration: when a frame loses arbitration, the controller simply re-transmits it as soon as the bus becomes free again.
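A minimal sketch of what that could look like, assuming the interface is called can0 and error checking is omitted for brevity (it follows the documented CAN_RAW_RECV_OWN_MSGS option and the MSG_CONFIRM flag reported by recvmsg()):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/ioctl.h>
#include <sys/uio.h>
#include <net/if.h>
#include <linux/can.h>
#include <linux/can/raw.h>

int main(void)
{
    int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);

    /* receive our own transmitted frames back (driver-level echo) */
    int recv_own = 1;
    setsockopt(s, SOL_CAN_RAW, CAN_RAW_RECV_OWN_MSGS, &recv_own, sizeof(recv_own));

    struct ifreq ifr;
    strcpy(ifr.ifr_name, "can0");   /* "can0" is an assumption */
    ioctl(s, SIOCGIFINDEX, &ifr);

    struct sockaddr_can addr = { .can_family = AF_CAN,
                                 .can_ifindex = ifr.ifr_ifindex };
    bind(s, (struct sockaddr *)&addr, sizeof(addr));

    /* send one frame */
    struct can_frame frame;
    memset(&frame, 0, sizeof(frame));
    frame.can_id  = 0x123;
    frame.can_dlc = 2;
    frame.data[0] = 0xDE;
    frame.data[1] = 0xAD;
    write(s, &frame, sizeof(frame));

    /* read frames back with recvmsg() and look for MSG_CONFIRM in msg_flags */
    struct can_frame rx;
    struct iovec iov = { .iov_base = &rx, .iov_len = sizeof(rx) };
    struct msghdr msg = { .msg_iov = &iov, .msg_iovlen = 1 };
    if (recvmsg(s, &msg, 0) > 0 && (msg.msg_flags & MSG_CONFIRM))
        printf("frame 0x%X was sent via this socket (driver echo)\n", rx.can_id);

    close(s);
    return 0;
}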
You can also use the command "candump can0" (or "candump can1") on your PC; it will show the CAN packets received on the given CAN interface.
Usage: candump [options] <CAN interface>+
(use CTRL-C to terminate candump)
Options: -t <type> (timestamp: (a)bsolute/(d)elta/(z)ero/(A)bsolute w date)
-c (increment color mode level)
-i (binary output - may exceed 80 chars/line)
-a (enable additional ASCII output)
-b <can> (bridge mode - send received frames to <can>)
-B <can> (bridge mode - like '-b' with disabled loopback)
-u <usecs> (delay bridge forwarding by <usecs> microseconds)
-l (log CAN-frames into file. Sets '-s 2' by default)
-L (use log file format on stdout)
-n <count> (terminate after receiption of <count> CAN frames)
-r <size> (set socket receive buffer to <size>)
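For example, to watch frames with absolute timestamps (assuming the interface is named can0):
candump -ta can0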
I'm trying to leverage my existing (fully configured and working) Samba AD DC as authentication for XWiki and other apps.
As such, I'm first trying to do a successful ldapsearch from the XWiki server.
The following command works on the Samba server, but not on the XWiki client:
ubuntu@xwiki:~$ ldapsearch -x -LLL -E pr=200/noprompt -H ldaps://10.0.1.191/ -D "CN=Administrator,CN=Users,DC=ad,DC=nitssolutions,DC=com" -w 'SambaNovi2018' -b 'DC=ad,DC=nitssolutions,DC=com' -s sub '(sAMAccountName=*)' cn mail memberOf
ldap_sasl_bind(SIMPLE): Can't contact LDAP server (-1)
However, if I do:
ping 10.0.1.191
I get:
ubuntu@xwiki:~$ ping 10.0.1.191
PING 10.0.1.191 (10.0.1.191) 56(84) bytes of data.
64 bytes from 10.0.1.191: icmp_seq=1 ttl=64 time=135 ms
64 bytes from 10.0.1.191: icmp_seq=2 ttl=64 time=138 ms
64 bytes from 10.0.1.191: icmp_seq=3 ttl=64 time=146 ms
^C
--- 10.0.1.191 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 135.741/140.317/146.970/4.832 ms
and if I do:
telnet 10.0.1.191 636
I'm able to connect.
EDIT: Additional information:
I added a -d 1 to the ldapsearch command line, and now I get:
ubuntu@xwiki:~$ ldapsearch -d 1 -x -LLL -E pr=200/noprompt -H ldaps://10.0.1.191/ -D "CN=Administrator,CN=Users,DC=ad,DC=nitssolutions,DC=com" -w 'SambaNovi2018' -b 'DC=ad,DC=nitssolutions,DC=com' -s sub '(sAMAccountName=*)' cn mail memberOf
ldap_url_parse_ext(ldaps://10.0.1.191/)
ldap_create
ldap_url_parse_ext(ldaps://10.0.1.191:636/??base)
ldap_sasl_bind
ldap_send_initial_request
ldap_new_connection 1 1 0
ldap_int_open_connection
ldap_connect_to_host: TCP 10.0.1.191:636
ldap_new_socket: 3
ldap_prepare_socket: 3
ldap_connect_to_host: Trying 10.0.1.191:636
ldap_pvt_connect: fd: 3 tm: -1 async: 0
attempting to connect:
connect success
TLS: peer cert untrusted or revoked (0x42)
TLS: can't connect: (unknown error code).
ldap_err2string
ldap_sasl_bind(SIMPLE): Can't contact LDAP server (-1)
Note, in particular, this line:
TLS: peer cert untrusted or revoked (0x42)
I'm going to try researching this error further, but as of now, I'm still stuck...
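One way to see exactly which certificate the server presents (and compare it against what the client trusts) is an openssl s_client probe, for example:
openssl s_client -connect 10.0.1.191:636 -showcerts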
EDIT2: Still more additional information:
When I run this command with the -d 1 parameter on the Samba server, it works, in spite of having
TLS: peer cert untrusted or revoked (0x42)
in the debug output....
Continuing to dig....
Help?
And here I go, answering my own question again...sigh. I should post here more often. Helps me clearly lay out the problem, which inevitably leads to finding a solution.
Anyhow, the solution was:
I had an /etc/ldap/ldap.conf file on my Samba DC machine as well as on my XWiki client machine, but the contents differed.
The Samba DC machine had:
#
# LDAP Defaults
#
# See ldap.conf(5) for details
# This file should be world readable but not world writable.
#BASE dc=example,dc=com
#URI ldap://ldap.example.com ldap://ldap-master.example.com:666
#SIZELIMIT 12
#TIMELIMIT 15
#DEREF never
# TLS certificates (needed for GnuTLS)
#TLS_CACERT /etc/ssl/certs/ca-certificates.crt
TLS_REQCERT allow
Which worked.
But my XWiki machine had:
#
# LDAP Defaults
#
# See ldap.conf(5) for details
# This file should be world readable but not world writable.
#BASE dc=example,dc=com
#URI ldap://ldap.example.com ldap://ldap-master.example.com:666
#SIZELIMIT 12
#TIMELIMIT 15
#DEREF never
# TLS certificates (needed for GnuTLS)
TLS_CACERT /etc/ssl/certs/ca-certificates.crt
which failed.
When I commented out the TLS_CACERT line and added the TLS_REQCERT line, it all started working as expected.
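Note that TLS_REQCERT allow simply tells the client not to verify the server certificate. A stricter alternative, assuming you can copy the Samba DC's CA certificate to the client (the path below is a placeholder), would be:
# CA certificate copied from the Samba DC (placeholder path)
TLS_CACERT /etc/ssl/certs/samba-ad-ca.pem
TLS_REQCERT demand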
I have put together a command for counting established port connections using the Nagios check_by_ssh plugin.
I get the expected output when I run the command by hand; however, after placing the command in commands.cfg I see "check_by_ssh: skip-stderr argument must be an integer" in the GUI. Any suggestion on this would be of great help.
Command:
/usr/local/nagios/libexec/check_by_ssh -l fuseadmin -H <hostname> -C "netstat -punta | grep -i ESTABLISHED | wc -l | awk '{if (\$0>2500) {print \"CRITICAL: Established Socket Count: \"\$0} else {print \"OK: Established Socket Count: \"\$0}}'" -i ~/.ssh/id_dsa -E
OK: Established Socket Count: 67
Commands.cfg:
define command {
command_name netstat_cnt_estanblished_gt_2500_fuse01
command_line /usr/local/nagios/libexec/check_by_ssh -l fuseadmin -H a0110pcsgesb01 -C "netstat -punta | grep -i ESTABLISHED | wc -l 2>&1 | awk '{if (\$0>2500) {print \"CRITICAL: Established Socket Count: \"\$0} else {print \"OK: Established Socket Count: \"\$0}}'" -i ~/.ssh/id_dsa -E
}
Service Definition
#netstat_cnt_estanblished_gt_2500_csg2.0
define service{
use generic-service ; Name of service template to use
host_name <hostname>
service_description Netstat Established Count
event_handler send-service-trap-fms
event_handler_enabled 1
check_command netstat_cnt_estanblished_gt_2500_fuse01
max_check_attempts 1
notifications_enabled 1 ; Service notifications are enabled
check_period 24x7 ; The service can be checked at any time of the day
max_check_attempts 3 ; Re-check the service up to 3 times in order to determine its final (hard) state
check_interval 2 ; Check the service every 2 minutes under normal conditions
retry_interval 2 ; Re-check the service every two minutes until a hard state can be determined
contact_groups fuse_users ; Notifications get sent out to everyone in the 'fuse_users' group
notification_options w,u,c,r ; Send notifications about warning, unknown, critical, and recovery events
notification_interval 30 ; Re-notify about service problems every 30 minutes
notification_period 24x7
}
**I have changed the actual hostname due to compliance.
Here it says:
check_by_ssh: print command output in verbose mode
right now it is not possible to print the command output of ssh. check_by_ssh
only prints the command itself. This patch adds printing the output too. This
makes it possible to use ssh with verbose logging which helps debugging any
connection, key or other ssh problems.
Note: you must use -E,--skip-stderr=<high number>, otherwise check_by_ssh would
always exit with unknown state.
Example:
./check_by_ssh -H localhost -o LogLevel=DEBUG3 -C "sleep 1" -E 999 -v
Meaning: you should just have to specify a number after "-E", e.g. -E 999, in your definition (as the example in the code block above shows).
...even though it's confusing (maybe a bug?), because the command help of check_by_ssh says:
-E, --skip-stderr[=n]
Ignore all or (if specified) first n lines on STDERR [optional]
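Applied to the definition from the question, the command_line would then end with an explicit number after -E, something like this (999 is an arbitrary value; the rest is unchanged):
define command {
command_name netstat_cnt_estanblished_gt_2500_fuse01
command_line /usr/local/nagios/libexec/check_by_ssh -l fuseadmin -H a0110pcsgesb01 -C "netstat -punta | grep -i ESTABLISHED | wc -l 2>&1 | awk '{if (\$0>2500) {print \"CRITICAL: Established Socket Count: \"\$0} else {print \"OK: Established Socket Count: \"\$0}}'" -i ~/.ssh/id_dsa -E 999
}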
I have a router with NAT port forwarding configured. I started an HTTP download of a big file through the NAT: the HTTP server is hosted on the LAN PC which holds the big file, and I launched the download from a WAN PC.
I disabled the NAT rule while the file copy was running, but the copy keeps going. I want the transfer to stop when I disable the NAT forward rule, using conntrack-tools.
My conntrack list contains the following conntrack session:
# conntrack -L | grep "33.13"
tcp 6 431988 ESTABLISHED src=192.168.33.13 dst=192.168.33.215 sport=52722 dport=80 src=192.168.3.17 dst=192.168.33.13 sport=80 dport=52722 [ASSURED] use=1
I tried to remove it with the following command:
# conntrack -D --orig-src 192.168.33.13
tcp 6 431982 ESTABLISHED src=192.168.33.13 dst=192.168.33.215 sport=52722 dport=80 src=192.168.3.17 dst=192.168.33.13 sport=80 dport=52722 [ASSURED] use=1
conntrack v1.4.3 (conntrack-tools): 1 flow entries have been deleted.
The conntrack session is removed, as I can see with the following command. But another conntrack session was created whose source IP address is the LAN address from the removed entry:
# conntrack -L | grep "33.13"
tcp 6 431993 ESTABLISHED src=192.168.3.17 dst=192.168.33.13 sport=80 dport=52722 src=192.168.33.13 dst=192.168.33.215 sport=52722 dport=80 [ASSURED] use=1
conntrack v1.4.3 (conntrack-tools): 57 flow entries have been shown.
I tried to remove the new conntrack entry, but it keeps coming back:
# conntrack -D --orig-src 192.168.3.17
# conntrack -L | grep "33.13"
conntrack v1.4.3 (conntrack-tools): 11 flow entries have been shown.
tcp 6 431981 ESTABLISHED src=192.168.3.17 dst=192.168.33.13 sport=80 dport=52722 src=192.168.33.13 dst=192.168.33.215 sport=52722 dport=80 [ASSURED] use=1
What am I missing?
First, if the "conntrack -D" command succeeds, you will see the message below:
conntrack v1.4.4 (conntrack-tools): 1 flow entries have been deleted.
Since that message is missing, we can guess that the deletion of the track entry failed.
Why does conntrack not delete the track?
Perhaps the session you want to delete is still being referenced from a specific skb or another track entry.
If you want more detailed information, try following the "ctnetlink_del_conntrack" call stack in the Linux kernel.
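For completeness, a sketch of deleting the flows by their full original tuples (addresses and ports taken from the question):
conntrack -D -p tcp --orig-src 192.168.33.13 --orig-dst 192.168.33.215 --sport 52722 --dport 80
conntrack -D -p tcp --orig-src 192.168.3.17 --orig-dst 192.168.33.13 --sport 80 --dport 52722
Keep in mind that a conntrack entry will simply be re-created as long as matching traffic is still accepted by the firewall, so the forwarding rule has to actually drop the packets for the transfer to stop.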