Suricata fails to block when using NFQUEUE mode

I'm setting up Suricata on Debian 10 to block matching requests, started with the command below:
/usr/bin/suricata -c /etc/suricata/suricata.yaml --pidfile /var/run/suricata.pid -q 3 -q 4 -q 5 -D -v --user=logstash
Whenever I receive a request that matches a drop rule, for example:
{"timestamp":"2021-12-16T14:59:09.855634+0000","flow_id":3110969609810,"event_type":"drop","src_ip":"192.168.1.5","dest_ip":"192.168.1.18","proto":"ICMP","icmp_type":8,"icmp_code":0,"drop":{"len":60,"tos":0,"ttl":128,"ipid":29443,"icmp_id":256,"icmp_seq":31241},"alert":{"action":"blocked","gid":1,"signature_id":1000002,"rev":1,"signature":"ICMP connection attempt","category":"","severity":3}}
Suricata stops right after that with this error:
[4585] 16/12/2021 -- 14:59:09 - (respond-reject-libnet11.c:226) <Error> (RejectSendLibnet11L3IPv4ICMP) -- [ERRCODE: SC_ERR_LIBNET_INIT(144)] - libnet_inint failed: libnet_open_raw4(): SOCK_RAW allocation failed: Operation not permitted
[4577] 16/12/2021 -- 14:59:09 - (tm-threads.c:1807) <Error> (TmThreadCheckThreadState) -- [ERRCODE: SC_ERR_FATAL(171)] - thread W-NFQ#5 failed
The output of getcap on the binary is:
# getcap /usr/bin/suricata
/usr/bin/suricata = cap_net_admin,cap_net_raw,cap_sys_nice+eip
What can I do to make it work?
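In case it matters: Suricata only reads from the NFQUEUE numbers given with -q, so the kernel steers traffic into those queues via iptables NFQUEUE rules. A sketch of what such rules look like (the exact chains in my setup may differ; only the queue numbers are taken from the command above):
iptables -I FORWARD -j NFQUEUE --queue-num 3
iptables -I INPUT -j NFQUEUE --queue-num 4
iptables -I OUTPUT -j NFQUEUE --queue-num 5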

Related

openGauss memory errors during sysbench mass data writing

Operation steps & problem description:
1. sysbench prepare: 100 tables, 100 million rows per table, 50 concurrent threads.
2. I also tried lowering the concurrency to 25 and the data volume to 50 million rows per table. In every case, various memory errors appear during the secondary-index creation step.
Parameters:
Physical Server memory: 128GB
gs_guc reload -N all -I all -c "shared_buffers='30GB'"
gs_guc reload -N all -I all -c "max_process_memory='90GB'"
gs_guc reload -N all -I all -c "maintenance_work_mem='10GB'"
Error symptom 1:
FATAL: `sysbench.cmdline.call_command' function failed: ./oltp_common.lua:245: db_bulk_insert_next() failed
FATAL: PQexec() failed: 7 memory is temporarily unavailable
FATAL: failed query was: CREATE INDEX k_56 ON sbtest56(k)
FATAL: `sysbench.cmdline.call_command' function failed: ./oltp_common.lua:253: SQL error, errno = 0, state = 'YY006': memory is temporarily unavailable
Creating table 'sbtest76'...
Inserting 100000000 records into
Error symptom 2:
Message from syslogd@testserver at Feb 23 10:19:45 ...
systemd:Caught , cannot fork for core dump: Cannot allocate memory
Error symptom 3:
openGauss crashes.
Creating a secondary index on 'sbtest9'...
Segmentation fault (core dumped)
Log for symptom 3:
could not fork new process for connection: Cannot allocate memory
could not fork new process for connection: Cannot allocate memory
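For reference, more conservative values can be tried with the same gs_guc reload syntax shown above; the numbers below are only an illustrative sketch, not a verified fix for this workload:
gs_guc reload -N all -I all -c "shared_buffers='16GB'"
gs_guc reload -N all -I all -c "maintenance_work_mem='2GB'"
The idea is simply to leave more headroom under max_process_memory for the 50 concurrent connections and the index builds.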

Cause: Command execution failed on the local server with non-zero exit code

Failed to fetch information from target servers
Cause: Command execution failed on the local server with non-zero exit code.
command: /usr/local/psa/bin/ipmanage --xml-info
exit code: 255
stdout: <ipinfo>
<ip name="193.160.214.57">
<state>0</state>
<type>shared</type>
<ip_address>193.160.214.57</ip_address>
<mask>255.255.255.255</mask>
<iface>venet0</iface>
<clients>0</clients>
<hostings>0</hostings>
<ftps>false</ftps>
<publicIp></publicIp>
</ip>
</ipinfo>
stderr: [2019-10-20 21:21:51.133] ERR [util_exec] proc_close() failed ['/usr/local/psa/admin/bin/f2bmng' '--reload'] with exit code [1]
PHP Fatal error: Uncaught PleskUtilException: f2bmng failed: 2019-10-20 21:21:51,115 fail2ban.jailreader [17670]: ERROR No file(s) found for glob /var/log/secure
2019-10-20 21:21:51,115 fail2ban [17670]: ERROR Failed during configuration: Have not found any log file for ssh jail
ERROR:__main__:Command '['/usr/bin/fail2ban-client', 'reload']' returned non-zero exit status 255 in /usr/local/psa/admin/plib/Service/Agent.php:210
Stack trace:
#0 /usr/local/psa/admin/plib/Ip/Ban/Manager.php(490): Service_Agent->execAndGetResponse('f2bmng', Array, '')
#1 /usr/local/psa/admin/plib/Ip/Ban/Manager.php(458): Ip_Ban_Manager->_callUtility('--reload')
#2 /usr/local/psa/admin/plib/Fail2Ban/EventListener.php(123): Ip_Ban_Manager->reload()
#3 [internal function]: Plesk\Fail2Ban\EventListener->applyChanges()
#4 {main}
thrown in /usr/local/psa/admin/plib/Service/Agent.php on line 210
That is a critical error, migration was stopped.
I don't know what is "wrong" with your Plesk (I'm not very familiar with it), but the fail2ban error is pretty simple:
ERROR No file(s) found for glob /var/log/secure
2019-10-20 21:21:51,115 fail2ban [17670]: ERROR Failed during configuration: Have not found any log file for ssh jail
Your ssh jail seems to be configured to monitor /var/log/secure, which does not exist. Either specify a proper logpath (/var/log/auth.log?) where sshd logs authentication errors,
or, if authentication is logged to the systemd journal on your system, specify backend = systemd for that jail.
The related fail2ban jail.local entry would be:
[ssh]
# backend = systemd
logpath = /var/log/auth.log
But you can surely configure this in the Plesk settings too.
Also note that your jail is called ssh, whereas the default jail shipped with fail2ban is sshd (but it may well be configured under this name by your maintainer).
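After changing jail.local, the configuration can be reloaded and the jail checked with fail2ban-client (jail name assumed to be ssh, as in the error above):
fail2ban-client reload
fail2ban-client status ssh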

snmptrapd logging error: couldn't open udp:162 -- errno 98 ("Address already in use")

I am trying to receive a trap generated by a Cisco router on my VM (Ubuntu 14.04). I can do an snmpwalk, so I guess SNMP is working fine, but I am not able to receive the traps generated by the router on my VM.
a@ubuntu:~$ sudo /etc/init.d/snmpd restart
* Restarting network management services:
a@ubuntu:~$ sudo /etc/init.d/snmpd status
* snmpd is running
* snmptrapd is running
Here is what I have inside the files:
/etc/default/snmpd:
export MIBS=
SNMPDRUN=yes
SNMPDOPTS='-Lsd -Lf /dev/null -u snmp -I -smux -p /var/run/snmpd.pid -c /etc/snmp/snmpd.conf'
TRAPDRUN=yes
# snmptrapd options (use syslog).
TRAPDOPTS='-n -On -t -Lsd -p /var/run/snmptrapd.pid'
/etc/snmp/:
snmpd.conf:
rocommunity public
snmptrapd.conf:
disableAuthorization yes
snmp.conf:
mibs:
The command I am running to view the traps on the VM:
a@ubuntu:/etc/snmp$ sudo snmptrapd -f -Lo -c snmptrapd.conf
couldn't open udp:162 -- errno 98 ("Address already in use")
I am confused since the port is being used by snmptrapd itself:
a@ubuntu:~$ cat /etc/services|grep 162
snmp-trap 162/tcp snmptrap # Traps for SNMP
snmp-trap 162/udp snmptrap
a@ubuntu:~$ sudo netstat -lnp| grep 162
udp 0 0 0.0.0.0:162 0.0.0.0:* 6216/snmptrapd
a@ubuntu:~$ ps -ef | grep snmptrapd
root 6216 2076 0 10:43 ? 00:00:00 /usr/sbin/snmptrapd -Lsd -p /var/run/snmptrapd.pid
a 6493 2667 0 11:47 pts/8 00:00:00 grep --color=auto snmptrapd
Generating a trap from Windows using SnmpTrapGen.exe leads to the same error.
Is there any way to solve this issue? I have googled a lot and have been stuck on this for days; any help would be very much appreciated.
Thanks a lot in advance!
Only one application can listen on port 162 at a time. If you get this error, you already have an application running that listens on port 162; that can be the snmptrapd service or your own application for SNMP traps. You should close one of the applications.
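For example, based on the ps output in the question, something along these lines should free the port before starting snmptrapd manually (the PID will of course differ on your system):
sudo kill 6216
sudo snmptrapd -f -Lo -c snmptrapd.conf
Alternatively, set TRAPDRUN=no in /etc/default/snmpd and restart snmpd, so the packaged snmptrapd is not started automatically.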

OpenMPI bind() failed on error Address already in use (48) Mac OS X

I have installed OpenMPI and tried to compile/execute one of the examples delivered with the newest version.
When I try to run it with mpiexec, it says that the address is already in use.
Does anyone have a hint why this keeps happening?
Kristians-MacBook-Pro:examples kristian$ mpicc -o hello hello_c.c
Kristians-MacBook-Pro:examples kristian$ mpiexec -n 4 ./hello
[Kristians-MacBook-Pro.local:02747] [[56076,0],0] bind() failed on error Address already in use (48)
[Kristians-MacBook-Pro.local:02747] [[56076,0],0] ORTE_ERROR_LOG: Error in file oob_usock_component.c at line 228
[Kristians-MacBook-Pro.local:02748] [[56076,1],0] usock_peer_send_blocking: send() to socket 19 failed: Socket is not connected (57)
[Kristians-MacBook-Pro.local:02748] [[56076,1],0] ORTE_ERROR_LOG: Unreachable in file oob_usock_connection.c at line 315
[Kristians-MacBook-Pro.local:02748] [[56076,1],0] orte_usock_peer_try_connect: usock_peer_send_connect_ack to proc [[56076,0],0] failed: Unreachable (-12)
[Kristians-MacBook-Pro.local:02749] [[56076,1],1] usock_peer_send_blocking: send() to socket 20 failed: Socket is not connected (57)
[Kristians-MacBook-Pro.local:02749] [[56076,1],1] ORTE_ERROR_LOG: Unreachable in file oob_usock_connection.c at line 315
[Kristians-MacBook-Pro.local:02749] [[56076,1],1] orte_usock_peer_try_connect: usock_peer_send_connect_ack to proc [[56076,0],0] failed: Unreachable (-12)
-------------------------------------------------------
Primary job terminated normally, but 1 process returned
a non-zero exit code.. Per user-direction, the job has been aborted.
-------------------------------------------------------
--------------------------------------------------------------------------
mpiexec detected that one or more processes exited with non-zero status, thus causing
the job to be terminated. The first process to do so was:
Process name: [[56076,1],0]
Exit code: 1
--------------------------------------------------------------------------
Thanks in advance.
Okay.
I have now changed the $TMPDIR environment variable with export TMPDIR=/tmp and it works.
Now it seems to me that the OpenMPI session directory was blocking my communication. But why would it?
Am I missing something here?
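For completeness, the workaround amounts to (run in the same shell before launching the job):
export TMPDIR=/tmp
mpiexec -n 4 ./hello
(One possible explanation, offered here only as a guess: the default per-user TMPDIR on macOS is a very long path under /var/folders/..., and the Unix-domain socket paths created inside the OpenMPI session directory there can exceed the OS limit on socket path length; /tmp keeps them short.)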

Error while trying to run an MPI program with a username

When I run the program via:
myshell$] mpirun --hosts localhost,192.168.1.4 ./a.out
the program executes successfully. Now when I try to run:
myshell$] mpirun --hosts localhost,myac@192.168.1.4 ./a.out
OpenSSH prompts for a password, and I get:
Fatal error in MPI_Init: Other MPI error, error stack:
MPIR_Init_thread(433)..............:
MPID_Init(176).....................: channel initialization failed
MPIDI_CH3_Init(70).................:
MPID_nem_init(286).................:
MPID_nem_tcp_init(108).............:
MPID_nem_tcp_get_business_card(354):
MPID_nem_tcp_init(313).............: gethostbyname failed, myac@192.168.1.4 (errno 1)
===================================================================================
= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
= EXIT CODE: 1
= CLEANING UP REMAINING PROCESSES
= YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
===================================================================================
[proxy:0:0@myac] HYD_pmcd_pmip_control_cmd_cb (./pm/pmiserv/pmip_cb.c:886): assert (!closed) failed
[proxy:0:0@myac] HYDT_dmxu_poll_wait_for_event (./tools/demux/demux_poll.c:77): callback returned error status
[proxy:0:0@myac] main (./pm/pmiserv/pmip.c:206): demux engine error waiting for event
[mpiexec@myac] HYDT_bscu_wait_for_completion (./tools/bootstrap/utils/bscu_wait.c:76): one of the processes terminated badly; aborting
[mpiexec@myac] HYDT_bsci_wait_for_completion (./tools/bootstrap/src/bsci_wait.c:23): launcher returned error waiting for completion
[mpiexec@myac] HYD_pmci_wait_for_completion (./pm/pmiserv/pmiserv_pmci.c:217): launcher returned error waiting for completion
[mpiexec@myac] main (./ui/mpich/mpiexec.c:331): process manager error waiting for completion
Why am I getting an error when I provide the username?
You could try specifying a username in your ssh config file (http://www.cyberciti.biz/faq/create-ssh-config-file-on-linux-unix/) instead of on the mpirun command line. That way mpirun would not be confused by the extra username part, which, as far as I can see from the documentation, it does not support. But ssh could, behind the scenes, use the username you specify in your ssh config file. And of course you'll want to set up SSH keys so you don't have to type a password.
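A minimal ~/.ssh/config entry for the host in the question would look like this (hostname and username taken from the mpirun command above):
Host 192.168.1.4
    User myac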
I don't believe MPICH supports providing usernames in the --hosts value on the command line. You should try the host-file-based method described on the wiki: http://wiki.mpich.org/mpich/index.php/Using_the_Hydra_Process_Manager#Using_Hydra_on_Machines_with_Different_User_Names
For example:
shell$ cat hosts
donner user=foo
foo user=bar
shakey user=bar
terra user=foo
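The job would then be launched by pointing Hydra at that host file instead of embedding usernames in --hosts, for example (executable name taken from the question):
shell$ mpiexec -f hosts -n 4 ./a.out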