Keepalived - VIP on a device different from the one where the VRRP instance is configured

I have 2 VMs with Linux and keepalived installed. Their hostnames are master and slave. Each VM has 2 network interfaces configured for different subnets:
master:
eth1 - 192.168.1.101/24
eth2 - 192.168.56.101/24
slave:
eth1 - 192.168.1.102/24
eth2 - 192.168.56.102/24
On each node I configured one vrrp_instance using interface eth1:
vrrp_instance VI_1 {
    ...
    interface eth1
    ...
}
And I assigned one VIP for each subnet - one per interface:
vrrp_instance VI_1 {
    ...
    virtual_ipaddress {
        192.168.1.250/32 dev eth1 label eth1:vip0
        192.168.56.250/32 dev eth2 label eth2:vip0
    }
    ...
}
So the complete configs are:
master:
vrrp_instance VI_1 {
    state MASTER
    interface eth1
    virtual_router_id 1
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass HURRDURR
    }
    virtual_ipaddress {
        192.168.1.250/32 dev eth1 label eth1:vip0
        192.168.56.250/32 dev eth2 label eth2:vip0
    }
}
slave:
vrrp_instance VI_1 {
    state BACKUP
    interface eth1
    virtual_router_id 1
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass HURRDURR
    }
    virtual_ipaddress {
        192.168.1.250/32 dev eth1 label eth1:vip0
        192.168.56.250/32 dev eth2 label eth2:vip0
    }
}
My question: could someone please tell me whether there are pitfalls in such a setup (on the condition that VRRP multicast is allowed on the interface specified in the interface <interface name> option)?
As far as I understand, the interface <interface name> option is used only for communication between the keepalived instances: it specifies which interface keepalived will use to send multicast traffic when negotiating which node should be the leader at any given moment. It should not affect the configured VIPs (provided I configured them properly).
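For reference, VRRP advertisements are IP protocol 112 sent to the multicast address 224.0.0.18, so (assuming tcpdump is available) it is easy to confirm on which interface they actually flow:

tcpdump -ni eth1 ip proto 112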

I realized at least one pitfall of such a configuration. In case of network problems on interface eth2 of the master server, the VIP assigned on eth2 will not be moved to the slave, because the VRRP instance communicates over the network on eth1 of both servers and therefore never notices the failure.
So I think such a configuration is not recommended: each VIP should be assigned on the same interface where its VRRP instance is configured.
Correct configuration:
master:
vrrp_instance VI_1 {
    state MASTER
    interface eth1
    virtual_router_id 1
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass HURRDURR
    }
    virtual_ipaddress {
        192.168.1.250/32 dev eth1 label eth1:vip0
    }
}
vrrp_instance VI_2 {
    state MASTER
    interface eth2
    virtual_router_id 2
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass HURRDURR
    }
    virtual_ipaddress {
        192.168.56.250/32 dev eth2 label eth2:vip0
    }
}
slave:
vrrp_instance VI_1 {
    state BACKUP
    interface eth1
    virtual_router_id 1
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass HURRDURR
    }
    virtual_ipaddress {
        192.168.1.250/32 dev eth1 label eth1:vip0
    }
}
vrrp_instance VI_2 {
    state BACKUP
    interface eth2
    virtual_router_id 2
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass HURRDURR
    }
    virtual_ipaddress {
        192.168.56.250/32 dev eth2 label eth2:vip0
    }
}
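After this change, failover of each subnet can be verified independently. With the labels from the configs above, for example:

ip addr show label eth1:vip0
ip addr show label eth2:vip0

Each command prints the VIP only on the node that currently holds it.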

Related

Trying Wireguard + Suricata + Nftables IPS project, some problems

I'm working on a project aimed at building a public VPN that passes through a Suricata IPS filter. I'm using the WireGuard VPN and Suricata in IPS mode with nftables.
I have managed to block IPS test traffic from the host (the server) to and from the internet; the VPN is also working, routing all traffic from clients to the internet through the server.
But the problem is that this traffic is not detected by the Suricata engine. I cannot find the appropriate nftables rule for this.
I have this nftables.conf file (some ingress filtering rules that I also have for bad traffic are not shown in this sample, to save space):
table inet firewall {
    # Sets are dictionaries and maps of ports, addresses etc.
    # These can then easily be used in the rules.
    # Sets can be named whatever you like.
    # TCP ports to allow, here we add ssh, http and https.
    set tcp_accepted {
        # The "inet_service" type is for tcp/udp ports, and "flags interval" allows intervals; see the mosh ports below.
        type inet_service; flags interval;
        elements = {
            22, 8080
        }
    }
    # UDP ports to allow, here we add ports for WireGuard and mosh.
    set udp_accepted {
        type inet_service; flags interval;
        elements = {
            19869
        }
    }
    # The first chain, can be named anything you like.
    chain incoming {
        # This line sets what traffic the chain will handle, the priority, and the default policy.
        # The priority matters when another table also has a chain set to "hook input" and you want to specify in what order they should run.
        # Use a semicolon to separate multiple commands on one row.
        type filter hook input priority 0; policy drop;
        # Drop invalid packets.
        ct state invalid drop
        # Drop non-SYN packets.
        tcp flags & (fin|syn|rst|ack) != syn ct state new counter drop
        # Limit ping requests.
        ip protocol icmp icmp type echo-request limit rate over 1/second burst 5 packets drop
        ip6 nexthdr icmpv6 icmpv6 type echo-request limit rate over 1/second burst 5 packets drop
        # NB: rules with "limit" need to be put before rules accepting "established" connections.
        # Allow all incoming established and related traffic.
        ct state established,related accept
        # Allow loopback.
        # Interfaces can be set with "iif" or "iifname" (oif/oifname). If the interface can come and go, use "iifname"; otherwise use "iif", since it performs better.
        iif lo accept
        # Allow certain inbound ICMP types (ping, traceroute).
        # With these allowed you are a good network citizen.
        ip protocol icmp icmp type { destination-unreachable, echo-reply, echo-request, source-quench, time-exceeded } accept
        # Without the nd-* ones IPv6 will not work.
        ip6 nexthdr icmpv6 icmpv6 type { destination-unreachable, echo-reply, echo-request, nd-neighbor-solicit, nd-router-advert, nd-neighbor-advert, packet-too-big, parameter-problem, time-exceeded } accept
        # Allow needed tcp and udp ports.
        iifname $wan tcp dport @tcp_accepted ct state new accept
        iifname $wan udp dport @udp_accepted ct state new accept
        # Allow WireGuard clients to access DNS and services.
        iifname $vpn udp dport 53 ct state new accept
        iifname $vpn tcp dport @tcp_accepted ct state new accept
        iifname $vpn udp dport @udp_accepted ct state new accept
        # Allow VPN clients to communicate with each other. (disabled)
        # iifname $vpn oifname $vpn ct state new accept
    }
    chain forwarding {
        type filter hook forward priority 0; policy drop;
        # Drop invalid packets.
        ct state invalid drop
        # Forward all established and related traffic.
        ct state established,related accept
        # Forward WireGuard traffic.
        # Allow WireGuard traffic to access the internet via wan.
        iifname $vpn oifname $wan ct state new accept
    }
    chain outgoing {
        type filter hook output priority 0; policy drop;
        # I believe setting "policy accept" would be the same, but I prefer explicit rules.
        # Drop invalid packets.
        ct state invalid drop
        # Allow all other outgoing traffic.
        # For some reason IPv6 ICMP needs to be explicitly allowed here.
        ip6 nexthdr ipv6-icmp accept
        ct state new,established,related accept
    }
    chain IPS_input {
        type filter hook input priority 10; policy drop;
        counter queue num 0 bypass
        counter drop
    }
    chain IPS_output {
        type filter hook output priority 10; policy drop;
        counter queue num 1 bypass
        counter drop
    }
}
# Separate table for hook pre- and postrouting.
# If using kernel 5.2 or later you can replace "ip" with "inet" to also filter IPv6 traffic.
table inet router {
    # With kernel 4.17 or earlier both need to be set even when one is empty.
    chain prerouting {
        type nat hook prerouting priority -100;
    }
    chain postrouting {
        type nat hook postrouting priority 100;
        # Masquerade WireGuard traffic.
        # All WireGuard traffic will look like it comes from the server's IP address.
        oifname $wan ip saddr $vpn_net masquerade
    }
}
Suricata is launched with this (queued):
suricata -D -c /etc/suricata/suricata.yaml -q 0 -q 1
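One way to see whether packets are actually reaching the queue rules (a sketch; the table and chain names are those from the config above) is to inspect the rule counters:

nft list chain inet firewall IPS_input
nft list chain inet firewall IPS_output

The counter statements in those chains report how many packets have hit the queue and drop rules.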
Any ideas?
Thanks for your time!

Nagios doesn't Trigger Continuous Alerts

I have set up Nagios on one of my VMs.
I receive the first alert when a service goes Critical, but I do not receive the subsequent alerts/emails.
Host template config:
define host {
    name host-template
    alias Default server template
    check_command check_dummy!0!!!!!!!
    max_check_attempts 10
    check_interval 5
    retry_interval 1
    check_period 24x7
    event_handler notify-host-by-email
    event_handler_enabled 1
    process_perf_data 1
    contacts user1
    notification_interval 10
    notification_period 24x7
    first_notification_delay 0
    notification_options d,u,s
    notifications_enabled 1
    _LTERM_LOAD_C 10
    _LTERM_LOAD_W 5
    _USED_MEM_C 30
    _USED_MEM_W 20
    _USED_SPACE_C 40
    _USED_SPACE_W 30
    register 0
}
Host config:
define host {
    host_name aaaaa
    use bbbbb
    alias DEV
    display_name DEV
    address 11.111.111.111
    _KEY xx
    _SERVERPORT xx:8082
    _SERVERPORTLFAT xx:443
    _URL xx:8082
    _USER test01
    register 1
}
notification_interval is set, but I still don't see any subsequent notifications.
I'm unsure if there is anything overriding it.
Because of this:
max_check_attempts 10
Nagios will retry the check up to 10 times before the problem enters a HARD state and a notification is sent. Try commenting it out (or lowering it) and check again.
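If it helps to see the interplay of these directives in one place, here is a sketch reusing the names from the template above (the values are illustrative, not a recommendation):

define host {
    ...
    max_check_attempts 3        ; checks before the problem goes HARD and the first alert is sent
    retry_interval 1            ; minutes between re-checks while the problem is SOFT
    notification_interval 10    ; minutes between repeat alerts once the problem is HARD
    ...
}

With max_check_attempts 10 and retry_interval 1 as in the question, the first notification arrives only after roughly ten minutes of failed checks.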

ESP-32 can't connect to MQTT broker: mqtt_client: Error network response

I am trying to connect my ESP32, which runs the ESP-IDF framework, to MQTT. I have imported this MQTT library successfully and have set up the configuration to look like this:
static void mqtt_app_start(void)
{
    const esp_mqtt_client_config_t mqtt_cfg = {
        // .host = "m15.cloudmqtt.com",
        .uri = "mqtt://rxarkckf:smNb81Ppfe7T@m15.cloudmqtt.com:10793", // uri in the format username:password@domain:port
        // config with host, port, user, password separated:
        // .host = "m15.cloudmqtt.com",
        // .port = 10793,
        // .username = "rxarkckf",
        // .password = "smNb81Ppfe7T",
        .event_handle = mqtt_event_handler,
        // .user_context = (void *)your_context
    };
    esp_mqtt_client_handle_t client = esp_mqtt_client_init(&mqtt_cfg);
    esp_mqtt_client_start(client);
}
I call mqtt_app_start(); in my app_main function. After uploading the code, my ESP32 doesn't connect to the MQTT broker and outputs this:
I (12633410) MQTT_CLIENT: Sending MQTT CONNECT message, type: 1, id: 0000
E (12633710) MQTT_CLIENT: Error network response
I (12633710) MQTT_CLIENT: Error MQTT Connected
I (12633710) MQTT_CLIENT: Reconnect after 10000 ms
I (12633710) MQTT_SAMPLE: MQTT_EVENT_DISCONNECTED
I have double checked that the values for the host, username, password, and port are all correct. When I look at the logs on the web interface hosted at cloudmqtt.com, I can see this output:
2018-11-17 03:50:53: New connection from 73.94.66.49 on port 10793.
2018-11-17 03:50:53: Invalid protocol "MQIs�" in CONNECT from 73.94.66.49.
2018-11-17 03:50:53: Socket error on client <unknown>, disconnecting.
2018-11-17 03:51:20: New connection from 73.94.66.49 on port 10793.
I had a similar experience using mosquitto.
Adding this line to mqtt_config.h got my MQTT working:
#define CONFIG_MQTT_PROTOCOL_311
I think the more correct way to set this option is in sdkconfig.h, either manually or using "make menuconfig".
The problem is very simple. The library you are using implements the MQTT 3.1 protocol. The server you are trying to connect to implements the MQTT 3.1.1 protocol or higher.
As specified in the document (https://www.oasis-open.org/committees/download.php/55095/mqtt-diffs-v1.0-wd01.doc):
4.1 Protocol Name
The Protocol Name is present in the variable header of a MQTT CONNECT control packet. The Protocol Name is a UTF-8 encoded string. In MQTT 3.1 the protocol name is "MQISDP". In MQTT 3.1.1 the protocol name is represented as "MQTT".
For technical info:
https://mqtt.org/mqtt-specification/
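Separately from the protocol-version fix, the same settings can be passed as individual fields instead of a URI, as in the commented-out lines of the question; this avoids any parsing surprises with special characters in the credentials. A minimal sketch using the question's own values:

const esp_mqtt_client_config_t mqtt_cfg = {
    .host = "m15.cloudmqtt.com",   // broker hostname, without a scheme
    .port = 10793,
    .username = "rxarkckf",
    .password = "smNb81Ppfe7T",
    .event_handle = mqtt_event_handler,
};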

TCP proxy to postgres database as an upstream server in nginx

Question: is it possible to set up Nginx as a reverse proxy for a database?
These are the configure flags I have at the moment, and I believed that having the --with-stream module was sufficient to use TCP streams to the database. Is this an NGINX Plus feature?
Nginx configuration options:
--prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=%{_libdir}/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-http_auth_request_module --with-threads --with-stream --with-stream_ssl_module --with-http_slice_module --with-mail --with-mail_ssl_module --with-file-aio --with-http_v2_module --with-cc-opt='-g -O2 -fstack-protector-strong -Wformat -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-z,relro -Wl,--as-needed' --with-ipv6
Nginx config
stream {
include conf.d/streams/*.conf;
}
contents of conf.d/streams/upstream.conf
upstream database_server {
    least_conn;
    keepalive 512;
    server 192.168.99.103:32778 max_fails=5 fail_timeout=30s weight=1;
}
Error message from Nginx
2016/02/22 03:54:13 [emerg] 242#242: invalid host in upstream "http://database_server" in /etc/nginx/conf.d/streams/database_server.conf:9
Here's an nginx configuration that worked for me (I'm running inside Docker, so some of these options are to help with that):
worker_processes auto;
daemon off;
error_log stderr info;
events {
    worker_connections 1024;
}
stream {
    upstream postgres {
        server my_postgres:5432;
    }
    server {
        listen 5432 so_keepalive=on;
        proxy_pass postgres;
    }
}
The key for me was the line listen 5432 so_keepalive=on;, which turns on TCP keepalive. Without that, I could connect but my connection would get reset after a few seconds.
The issue was the "http://database_server".
It is a TCP stream, so you need to just proxy_pass database_server.
Also, keepalive is not a directive that goes in a TCP upstream block.
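Putting both fixes together, the stream config from the question would look something like this (a sketch; the upstream server address is the one from the question):

upstream database_server {
    least_conn;
    server 192.168.99.103:32778 max_fails=5 fail_timeout=30s weight=1;
}
server {
    listen 5432;
    proxy_pass database_server;
}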

Oracle XE not binding on IP4 port 1521

I have Oracle 11g XE installed on Ubuntu 12.04 and am facing difficulty getting Oracle to bind to a TCP port. The IPv6 binding seems to be fine, but not IPv4 (tcp 0.0.0.0:1521).
Here is the oracle-xe status:
root@pearBox:~# /etc/init.d/oracle-xe status
LSNRCTL for Linux: Version 11.2.0.2.0 - Production on 06-JUN-2013 15:08:34
Copyright (c) 1991, 2011, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC_FOR_XE)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 11.2.0.2.0 - Production
Start Date 06-JUN-2013 15:06:42
Uptime 0 days 0 hr. 1 min. 52 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Default Service XE
Listener Parameter File /u01/app/oracle/product/11.2.0/xe/network/admin/listener.ora
Listener Log File /u01/app/oracle/diag/tnslsnr/pearBox/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC_FOR_XE)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=pearBox)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=pearBox)(PORT=8080))(Presentation=HTTP)(Session=RAW))
Services Summary...
Service "PLSExtProc" has 1 instance(s).
Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
Service "XE" has 1 instance(s).
Instance "XE", status READY, has 1 handler(s) for this service...
Service "XEXDB" has 1 instance(s).
Instance "XE", status READY, has 1 handler(s) for this service...
The command completed successfully
Netstat results:
root@pearBox:~# netstat -ntpl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 127.0.0.1:3306          0.0.0.0:*               LISTEN      914/mysqld
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      1859/apache2
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      608/sshd
tcp6       0      0 :::22447                :::*                    LISTEN      1757/xe_d000_XE
tcp6       0      0 :::8080                 :::*                    LISTEN      1655/tnslsnr
tcp6       0      0 :::1521                 :::*                    LISTEN      1655/tnslsnr
tcp6       0      0 :::22                   :::*                    LISTEN      608/sshd
And the listener configuration:
root@pearBox:~# cat /u01/app/oracle/product/11.2.0/xe/network/admin/listener.ora
# listener.ora Network Configuration File:
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = PLSExtProc)
      (ORACLE_HOME = /u01/app/oracle/product/11.2.0/xe)
      (PROGRAM = extproc)
    )
  )
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC_FOR_XE))
      (ADDRESS = (PROTOCOL = TCP)(HOST = pearBox)(PORT = 1521))
    )
  )
DEFAULT_SERVICE_LISTENER = (XE)
I changed the hostname to "HOST = 127.0.0.1" and it is binding on localhost, but I am not able to access the Oracle instance from the network!
root@pearBox:~# netstat -ntpl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 127.0.0.1:3306          0.0.0.0:*               LISTEN      914/mysqld
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      1859/apache2
tcp        0      0 127.0.0.1:1521          0.0.0.0:*               LISTEN      2339/tnslsnr
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      608/sshd
tcp6       0      0 :::21121                :::*                    LISTEN      2443/xe_d000_XE
tcp6       0      0 :::22                   :::*                    LISTEN      608/sshd
I would appreciate it if you could help me get this issue resolved.
I just found this post; I had the same issue. It was the result of changing my hostname post-installation. I was able to remedy the situation by updating the hostname in both:
/u01/app/oracle/product/11.2.0/xe/network/admin/tnsnames.ora
and
/u01/app/oracle/product/11.2.0/xe/network/admin/listener.ora
I would suggest taking a look at the firewall rules: https://help.ubuntu.com/12.04/serverguide/firewall.html
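For example, if ufw is in use (a hypothetical sketch; 1521 is the listener port from the question):

sudo ufw allow 1521/tcp
sudo ufw status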
Changing the hostname worked for me too, as I found a mismatch by checking:
uname -a
the listener log (log.xml)
/etc/hosts
I added the full host name with the domain.
I appreciate the help, as I had been searching internet posts for a week before I found this reference.
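In my case the mismatch was that /etc/hosts carried only the short hostname; the corrected entry looked something like this (the IP address and domain are illustrative):

192.168.1.10  pearBox.example.com  pearBox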
