Trying WireGuard + Suricata + nftables IPS project, some problems - wireguard

I'm working on a project aimed at building a public VPN whose traffic passes through a Suricata IPS filter. I'm using WireGuard for the VPN and Suricata in IPS mode with nftables.
I've managed to block IPS test traffic from the host (the server) to and from the internet; the VPN also works, routing all client traffic to the internet through the server.
The problem is that this routed traffic is not seen by the Suricata engine, and I cannot find the appropriate nftables rule for it.
I have this nftables.conf file (some ingress filtering rules I also have for bad traffic are omitted from this sample to save space):
table inet firewall {

    # Sets are dictionaries and maps of ports, addresses etc.
    # These can then easily be used in the rules.
    # Sets can be named whatever you like.

    # TCP ports to allow, here we add ssh and http.
    set tcp_accepted {
        # "inet_service" is for tcp/udp ports and "flags interval" allows port ranges.
        type inet_service; flags interval;
        elements = {
            22, 8080
        }
    }

    # UDP ports to allow, here we add the WireGuard port.
    set udp_accepted {
        type inet_service; flags interval;
        elements = {
            19869
        }
    }

    # The first chain, can be named anything you like.
    chain incoming {
        # This line sets what traffic the chain will handle, the priority and the default policy.
        # The priority matters when another table also has a chain with "hook input" and you want to specify the order they run in.
        # Use a semicolon to separate multiple commands on one row.
        type filter hook input priority 0; policy drop;

        # Drop invalid packets.
        ct state invalid drop

        # Drop new connections that do not start with a SYN.
        tcp flags & (fin|syn|rst|ack) != syn ct state new counter drop

        # Limit ping requests.
        ip protocol icmp icmp type echo-request limit rate over 1/second burst 5 packets drop
        ip6 nexthdr icmpv6 icmpv6 type echo-request limit rate over 1/second burst 5 packets drop

        # NB: rules with "limit" need to come before the rule accepting "established" connections.

        # Allow all incoming established and related traffic.
        ct state established,related accept

        # Allow loopback.
        # Interfaces can be matched with "iif" or "iifname" (oif/oifname). If the interface can come and go use "iifname", otherwise use "iif" since it performs better.
        iif lo accept

        # Allow certain inbound ICMP types (ping, traceroute).
        # With these allowed you are a good network citizen.
        ip protocol icmp icmp type { destination-unreachable, echo-reply, echo-request, source-quench, time-exceeded } accept
        # Without the nd-* ones IPv6 will not work.
        ip6 nexthdr icmpv6 icmpv6 type { destination-unreachable, echo-reply, echo-request, nd-neighbor-solicit, nd-router-advert, nd-neighbor-advert, packet-too-big, parameter-problem, time-exceeded } accept

        # Allow the needed tcp and udp ports.
        iifname $wan tcp dport @tcp_accepted ct state new accept
        iifname $wan udp dport @udp_accepted ct state new accept

        # Allow WireGuard clients to access DNS and services.
        iifname $vpn udp dport 53 ct state new accept
        iifname $vpn tcp dport @tcp_accepted ct state new accept
        iifname $vpn udp dport @udp_accepted ct state new accept

        # Allow VPN clients to communicate with each other. (disabled)
        # iifname $vpn oifname $vpn ct state new accept
    }

    chain forwarding {
        type filter hook forward priority 0; policy drop;

        # Drop invalid packets.
        ct state invalid drop

        # Forward all established and related traffic.
        ct state established,related accept

        # Allow WireGuard traffic to access the internet via wan.
        iifname $vpn oifname $wan ct state new accept
    }

    chain outgoing {
        type filter hook output priority 0; policy drop;
        # I believe "policy accept" would behave the same here, but I prefer explicit rules.

        # Drop invalid packets.
        ct state invalid drop

        # Allow all other outgoing traffic.
        # For some reason IPv6 ICMP needs to be explicitly allowed here.
        ip6 nexthdr ipv6-icmp accept
        ct state new,established,related accept
    }

    chain IPS_input {
        type filter hook input priority 10; policy drop;
        counter queue num 0 bypass
        counter drop
    }

    chain IPS_output {
        type filter hook output priority 10; policy drop;
        counter queue num 1 bypass
        counter drop
    }
}

# Separate table for the pre- and postrouting hooks.
# With kernel 5.2 or later you can use "inet" instead of "ip" here to also handle IPv6 traffic.
table inet router {
    # With kernel 4.17 or earlier both chains need to exist even when one is empty.
    chain prerouting {
        type nat hook prerouting priority -100;
    }

    chain postrouting {
        type nat hook postrouting priority 100;

        # Masquerade WireGuard traffic.
        # All WireGuard traffic will look like it comes from the server's IP address.
        oifname $wan ip saddr $vpn_net masquerade
    }
}
Suricata is launched against those queues with:
suricata -D -c /etc/suricata/suricata.yaml -q 0 -q 1
Any idea?
Thanks for your time!
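For reference, one detail stands out in the ruleset above: the IPS chains only hook input and output, so traffic forwarded between $vpn and $wan (i.e. the VPN clients' traffic) never reaches an NFQUEUE, which would explain why Suricata does not see it. A hedged sketch of an additional forward-hook queue chain (queue number 2 and the extra -q 2 flag are assumptions, not part of the original setup):

    # Inside "table inet firewall":
    # Queue forwarded packets (VPN clients to the internet) to Suricata as well.
    # With "bypass", traffic is accepted if nothing is listening on the queue.
    chain IPS_forward {
        type filter hook forward priority 10; policy drop;
        counter queue num 2 bypass
        counter drop
    }

and launch Suricata with the extra queue:

suricata -D -c /etc/suricata/suricata.yaml -q 0 -q 1 -q 2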

Related

Cannot assign requested address with YugabyteDB and Docker Volume

Problem when using YugabyteDB with a persistent volume in Docker.
On the first run everything works fine, but when the container is re-created with the existing volume, it fails to start.
master.err:
./../src/yb/master/master_main.cc:131] Network error (yb/util/net/socket.cc:325): Error binding socket to 172.28.0.3:7100: Cannot assign requested address (system error 99)
# 0x2938618 google::LogMessage::SendToLog()
# 0x29394d3 google::LogMessage::Flush()
# 0x29399cf google::LogMessageFatal::~LogMessageFatal()
# 0x2677cde main
# 0x7fb112f46825 __libc_start_main
# 0x260802e _start
There is a yugabyted.conf in yb-data/conf with the IP written in it.
When the container is re-created it gets a new IP, but yugabyted.conf still holds the container's old address:
...
"advertise_address": "172.28.0.3",
...
When starting with yugabyted, the directory set by --base_dir holds the configuration in conf/yugabyted.conf and the data in data, which is what --fs_data_dirs is set to when yb-master and yb-tserver are started.
If you want the data directory on the volume but not the configuration, you can set it with:
--tserver_flags=fs_data_dirs=/ybdata --master_flags=fs_data_dirs=/ybdata
and leave --base_dir within the container.
Another possibility, if you want the configuration on the external volume, is to use a configuration that does not depend on container addresses, such as:
--advertise_address=0.0.0.0
which will listen on all interfaces.
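As a sketch of the first approach (the volume name ybvolume, container name yb1, and mount point /ybdata are hypothetical; the image tag is whatever you already run):

docker run -d --name yb1 -v ybvolume:/ybdata yugabytedb/yugabyte:latest \
    bin/yugabyted start --daemon=false \
    --tserver_flags=fs_data_dirs=/ybdata \
    --master_flags=fs_data_dirs=/ybdata

Here --base_dir is left at its default inside the container, so conf/yugabyted.conf is regenerated with the container's current address on each re-creation, while the data stays on the volume.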

What is a proper HTTP status code that server returns when it limits total number of connections?

I have made a simple HTTP server that listens for socket connections. The server code limits the total number of connections it can hold simultaneously, so I have these lines:
do {
    new_fd = accept(lfd, NULL, NULL);
    nfds += 1;
    ...
    if (nfds + 1 > ntotal) { // connection limit exceeded
        set_headers(new_fd, /* HTTP status code here */);
        /* close socket after the error has been sent */
    }
} while (1);
In this situation I'm interested in the HTTP status code the server should send before closing the socket.
From the HTTP/1.1 spec (RFC 2616, quoted below), 503 appears to be the appropriate HTTP status code to send for an overloaded server:
10.5.4 503 Service Unavailable
The server is currently unable to handle the request due to a
temporary overloading or maintenance of the server. The implication is
that this is a temporary condition which will be alleviated after some
delay. If known, the length of the delay MAY be indicated in a
Retry-After header. If no Retry-After is given, the client SHOULD
handle the response as it would for a 500 response.
Note: The existence of the 503 status code does not imply that a
server must use it when becoming overloaded. Some servers may wish
to simply refuse the connection.
(bold emphasis mine)
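A minimal sketch of sending that status in the branch above, before closing (the header values, such as Retry-After: 30, are illustrative; a real server would also check the return value of send):

/* Send a minimal 503 response, then close the connection. */
static const char resp[] =
    "HTTP/1.1 503 Service Unavailable\r\n"
    "Retry-After: 30\r\n"
    "Connection: close\r\n"
    "Content-Length: 0\r\n"
    "\r\n";
send(new_fd, resp, sizeof(resp) - 1, 0);
close(new_fd);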

TCP proxy to postgres database as an upstream server in nginx

Question: Is it possible to set up Nginx as a reverse proxy for a database?
These are the flags I have at the moment, and I believed that having the --with-stream module was sufficient to use TCP streams to the database. Is this an NGINX Plus feature?
Nginx configuration options:
--prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=%{_libdir}/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-http_auth_request_module --with-threads --with-stream --with-stream_ssl_module --with-http_slice_module --with-mail --with-mail_ssl_module --with-file-aio --with-http_v2_module --with-cc-opt='-g -O2 -fstack-protector-strong -Wformat -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-z,relro -Wl,--as-needed' --with-ipv6
Nginx config
stream {
    include conf.d/streams/*.conf;
}
contents of conf.d/streams/upstream.conf
upstream database_server {
    least_conn;
    keepalive 512;
    server 192.168.99.103:32778 max_fails=5 fail_timeout=30s weight=1;
}
Error message from Nginx
2016/02/22 03:54:13 [emerg] 242#242: invalid host in upstream "http://database_server" in /etc/nginx/conf.d/streams/database_server.conf:9
Here's an nginx configuration that worked for me (I'm running inside Docker, so some of these options are to help with that):
worker_processes auto;
daemon off;
error_log stderr info;

events {
    worker_connections 1024;
}

stream {
    upstream postgres {
        server my_postgres:5432;
    }

    server {
        listen 5432 so_keepalive=on;
        proxy_pass postgres;
    }
}
The key for me was the line listen 5432 so_keepalive=on;, which turns on TCP keepalive. Without that, I could connect but my connection would get reset after a few seconds.
The issue was the "http://database_server": it is a TCP stream, so you need to just proxy_pass database_server.
Also, keepalive is not a directive that goes in a TCP upstream block.
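Putting those two fixes together, a hedged version of the upstream.conf contents (addresses and tuning values kept from the question):

upstream database_server {
    least_conn;
    server 192.168.99.103:32778 max_fails=5 fail_timeout=30s weight=1;
}

server {
    listen 5432;
    proxy_pass database_server;    # bare upstream name, no http:// scheme
}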

Mongoose opening multiple unwanted TCP sockets on reconnect

Wanting to test a MongoDB server up/down procedure connected to Node/Mongoose, we found out that Mongoose can sometimes open hundreds of TCP sockets (which is unnecessary and potentially blocking for a user who is limited to a certain number of sockets). This occurs in the following case and environment:
Node supervised with PM2 and MongoDB supervised with daemontools.
At a normal, clean startup:
$ netstat -alpet | grep mongo
tcp 0 0 *:27017 *:* LISTEN mongo 65910844 22930/mongod
tcp 0 0 localhost.localdomain:27017 localhost.localdomain:54595 ESTABLISHED mongo 65911104 22930/mongod
The last "ESTABLISHED" line repeated 5 times since the option (poolSize: 5) is specified in Mongoose ("mongo" is the user running mongod under daemontools)
When we have the Node procedure :
mongoose.connection.on('disconnected', function () {
    var options = { server: { auto_reconnect: true, poolSize: 5,
                              socketOptions: { connectTimeoutMS: 5000 } } };
    console.log('Mongoose default connection disconnected ' + mongoose.connection.readyState);
    mongoose.connect(dbURI, options);
});
and we bring MongoDB down via daemontools (mongodbdaemon is a simple mongod command):
svc -d /service/mongodbdaemon
there is of course no mongod running on the system (verified with the netstat command), and the web server pages that use Mongoose report, as expected:
{"name":"MongoError","message":"topology was destroyed"}
The problem occurs at this stage. From the moment we bring MongoDB down, Mongoose accumulates the connect() calls issued in the 'disconnected' event handler. This means that the longer we wait before bringing MongoDB back up, the more TCP connections will be opened.
So bringing up MongoDB by
svc -u /service/mongodbdaemon
gives the following:
$ netstat -alpet | grep mongo | wc -l
850
that is, 850 ESTABLISHED TCP connections to mongod!
If we bring mongod down again, those hundreds of connections remain in the TIME_WAIT state until Linux cleans up the socket pool.
Questions
Can we check whether a MongoDB instance is available before connecting to it?
Can we configure Mongoose not to accumulate reconnection attempts every millisecond or so?
Is there a buffer of pending connection operations (as there is for mongoose.insert[...]) that we can access or clean manually?
Problem reproducible on CentOS 6.7 / MongoDB 3.0.6 / mongoose 4.1.8 / node 4.0.0.
Edit:
From the official mongoose site, where I posted this question after posting it here, I received an answer: "using auto_reconnect: true on the initial connect() operation (which is set by default), there is no reason to reconnect() in a disconnect event callback".
This is true and it works just fine, but the question is now why this happens and how to avoid it (it is serious enough at the Linux system level to be an issue in mongoose).
Thanks!
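For illustration, a hedged sketch of the pattern that answer suggests (dbURI and the options object are taken from the question):

// Connect once; the driver's auto_reconnect (on by default) handles retries.
var options = { server: { auto_reconnect: true, poolSize: 5,
                          socketOptions: { connectTimeoutMS: 5000 } } };
mongoose.connect(dbURI, options);

mongoose.connection.on('disconnected', function () {
    // Log only -- calling mongoose.connect() here queues another
    // connection attempt per event, which is how the sockets pile up.
    console.log('Mongoose default connection disconnected');
});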

SSL Cipher help in C

I am trying to use SSL on top of TCP/IP to send an HTTPS request to a site using C. I have no access to curl or other standard libraries. Pretend I can't load any libraries at all.
I need to set an SSL profile cipher. When I successfully use curl on my Linux box to talk with the server I see: SSL connection using ECDHE-RSA-AES128-SHA
If my options for setting the cipher are:
SSL_kRSA (RSA Key Exchange)
SSL_kEDH (tmp DH key no DH cert)
SSL_aRSA (Authenticate with RSA)
SSL_aDSS (Authenticate with DSS)
SSL_DES (DES)
SSL_3DES (3DES)
SSL_RC4 (RC4)
SSL_RC2 (RC2)
SSL_AES (AES)
SSL_MD5 (MD5)
SSL_SHA1 (SHA1)
SSL_SHA256 (SHA256)
SSL_SHA384 (SHA384)
SSL_RSA ([SSL_kRSA|SSL_aRSA] RSA)
SSL_DSS ([SSL_aDSS] Authenticate with DSS)
I can combine multiple options with something like:
SSL_RSA | SSL_AES
Protocol is TLSv1.2
What should my cipher look like?
"Pretend like i can't load any libraries at all." If that is true, you will need to implement the cipher itself plus the SSL handling layer ^_^.
Assuming you are using OpenSSL and have TCP established on socket_fd, you need to create an SSL_CTX with SSL_CTX_new(SSLv23_client_method()). Normally, to set the cipher list, you use SSL_CTX_set_cipher_list(ctx, "HIGH:!aNULL:!eNULL:@STRENGTH"); see http://openssl.org/docs/apps/ciphers.html for all available options, and you may specify a particular cipher.
Then create an SSL session with SSL_new(ctx) and SSL_set_fd(ssl, socket_fd); after that, use SSL_connect(...) and SSL_read(...)/SSL_write(...) to communicate with the server.
When everything is done, call SSL_shutdown(...), then SSL_free(...) and SSL_CTX_free(...).
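As a hedged sketch of that sequence (assuming the OpenSSL headers are available and socket_fd is an already-connected TCP socket; the request line and host are illustrative):

#include <openssl/ssl.h>

/* Minimal TLS client over an existing TCP socket. */
int https_request(int socket_fd)
{
    SSL_library_init();
    SSL_load_error_strings();

    SSL_CTX *ctx = SSL_CTX_new(SSLv23_client_method());
    SSL_CTX_set_cipher_list(ctx, "ECDHE-RSA-AES128-SHA"); /* cipher seen with curl */

    SSL *ssl = SSL_new(ctx);
    SSL_set_fd(ssl, socket_fd);

    int ok = (SSL_connect(ssl) == 1);
    if (ok) {
        SSL_write(ssl, "GET / HTTP/1.1\r\nHost: example.com\r\n\r\n", 37);
        /* SSL_read(...) to fetch the response */
    }

    SSL_shutdown(ssl);
    SSL_free(ssl);
    SSL_CTX_free(ctx);
    return ok;
}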
