Question: Is it possible to set Nginx as a reverse proxy for a database?
These are the flags I have at the moment, and I believed that having the --with-stream module was sufficient to use TCP streams to the database. Is this an NGINX Plus-only feature?
Nginx configuration options:
--prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=%{_libdir}/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-http_auth_request_module --with-threads --with-stream --with-stream_ssl_module --with-http_slice_module --with-mail --with-mail_ssl_module --with-file-aio --with-http_v2_module --with-cc-opt='-g -O2 -fstack-protector-strong -Wformat -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-z,relro -Wl,--as-needed' --with-ipv6
Nginx config
stream {
include conf.d/streams/*.conf;
}
contents of conf.d/streams/upstream.conf
upstream database_server {
least_conn;
keepalive 512;
server 192.168.99.103:32778 max_fails=5 fail_timeout=30s weight=1;
}
Error message from Nginx
2016/02/22 03:54:13 [emerg] 242#242: invalid host in upstream "http://database_server" in /etc/nginx/conf.d/streams/database_server.conf:9
Here's an nginx configuration that worked for me (I'm running inside Docker, so some of these options are to help with that):
worker_processes auto;
daemon off;
error_log stderr info;
events {
worker_connections 1024;
}
stream {
upstream postgres {
server my_postgres:5432;
}
server {
listen 5432 so_keepalive=on;
proxy_pass postgres;
}
}
The key for me was the line listen 5432 so_keepalive=on;, which turns on TCP keepalive. Without that, I could connect but my connection would get reset after a few seconds.
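To sanity-check the proxy, you can point an ordinary database client at the nginx host instead of at Postgres directly. A minimal example, assuming a PostgreSQL client and placeholder host/user/database names:
psql -h nginx-host -p 5432 -U myuser mydb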
The issue is the "http://database_server".
It is a TCP stream, so you need to just proxy_pass database_server (no scheme).
Also, keepalive is not a directive that goes in a TCP (stream) upstream block; see the sketch below.
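Put together, a corrected version of the stream config from the question might look roughly like this (a sketch: the listen port is an arbitrary example, and keepalive has been dropped from the upstream):
stream {
    upstream database_server {
        least_conn;
        server 192.168.99.103:32778 max_fails=5 fail_timeout=30s weight=1;
    }
    server {
        # example port; use whatever local port your clients should connect to
        listen 5432;
        proxy_pass database_server;
    }
}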
Related
I'm working on a project aimed at building a public VPN that passes through a Suricata IPS filter. I'm using the WireGuard VPN and Suricata in IPS mode with nftables.
I managed to block IPS test traffic from the host (the server) to and from the internet; the VPN is also working, routing all client traffic to the internet through the server.
But the problem is that this traffic is not inspected by the Suricata engine, and I cannot find the appropriate nftables rule for it.
I have this nftables.conf file (some ingress filtering rules for bad traffic are omitted from this sample to save space):
table inet firewall {
# Sets are dictionaries and maps of ports, addresses etc.
# These can then easily be used in the rules.
# Sets can be named whatever you like.
# TCP ports to allow, here we add ssh, http and https.
set tcp_accepted {
# The "inet_service" are for tcp/udp ports and "flags interval" allows to set intervals, see the mosh ports below.
type inet_service; flags interval;
elements = {
22, 8080
}
}
# UDP ports to allow, here we add ports for WireGuard and mosh.
set udp_accepted {
type inet_service; flags interval;
elements = {
19869
}
}
# The first chain, can be named anything you like.
chain incoming {
# This line sets what traffic the chain will handle, the priority and the default policy.
# The priority matters when another table has a chain set to "hook input" and you want to specify the order in which they run.
# Use a semicolon to separate multiple commands on one row.
type filter hook input priority 0; policy drop;
# Drop invalid packets.
ct state invalid drop
# Drop non-SYN packets.
tcp flags & (fin|syn|rst|ack) != syn ct state new counter drop
# Limit ping requests.
ip protocol icmp icmp type echo-request limit rate over 1/second burst 5 packets drop
ip6 nexthdr icmpv6 icmpv6 type echo-request limit rate over 1/second burst 5 packets drop
# OBS! Rules with "limit" need to be put before rules accepting "established" connections.
# Allow all incoming established and related traffic.
ct state established,related accept
# Allow loopback.
# Interfaces can be set with "iif" or "iifname" (oif/oifname). If the interface can come and go use "iifname", otherwise use "iif" since it performs better.
iif lo accept
# Allow certain inbound ICMP types (ping, traceroute).
# With these allowed you are a good network citizen.
ip protocol icmp icmp type { destination-unreachable, echo-reply, echo-request, source-quench, time-exceeded } accept
# Without the nd-* ones ipv6 will not work.
ip6 nexthdr icmpv6 icmpv6 type { destination-unreachable, echo-reply, echo-request, nd-neighbor-solicit, nd-router-advert, nd-neighbor-advert, packet-too-big, parameter-problem, time-exceeded } accept
# Allow needed tcp and udp ports.
iifname $wan tcp dport @tcp_accepted ct state new accept
iifname $wan udp dport @udp_accepted ct state new accept
# Allow WireGuard clients to access DNS and services.
iifname $vpn udp dport 53 ct state new accept
iifname $vpn tcp dport @tcp_accepted ct state new accept
iifname $vpn udp dport @udp_accepted ct state new accept
# Allow VPN clients to communicate with each other. (disabled)
# iifname $vpn oifname $vpn ct state new accept
}
chain forwarding {
type filter hook forward priority 0; policy drop;
# Drop invalid packets.
ct state invalid drop
# Forward all established and related traffic.
ct state established,related accept
# Forward WireGuard traffic.
# Allow WireGuard traffic to access the internet via wan.
iifname $vpn oifname $wan ct state new accept
}
chain outgoing {
type filter hook output priority 0; policy drop;
# I believe setting "policy accept" would be the same, but I prefer explicit rules.
# Drop invalid packets.
ct state invalid drop
# Allow all other outgoing traffic.
# For some reason ipv6 ICMP needs to be explicitly allowed here.
ip6 nexthdr ipv6-icmp accept
ct state new,established,related accept
}
chain IPS_input {
type filter hook input priority 10; policy drop;
counter queue num 0 bypass
counter drop
}
chain IPS_output {
type filter hook output priority 10; policy drop;
counter queue num 1 bypass
counter drop
}
}
# Separate table for hook pre- and postrouting.
# If using kernel 5.2 or later you can replace "ip" with "inet" to also filter IPv6 traffic.
table inet router {
# With kernel 4.17 or earlier both need to be set even when one is empty.
chain prerouting {
type nat hook prerouting priority -100;
}
chain postrouting {
type nat hook postrouting priority 100;
# Masquerade WireGuard traffic.
# All WireGuard traffic will look like it comes from the server's IP address.
oifname $wan ip saddr $vpn_net masquerade
}
}
Suricata is launched with this command (NFQUEUE mode, matching the queue numbers above):
suricata -D -c /etc/suricata/suricata.yaml -q 0 -q 1
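To check whether packets are actually being handed to those queues, you can inspect the queue statistics and the rule counters (just a debugging aid, not a fix):
cat /proc/net/netfilter/nfnetlink_queue
nft list chain inet firewall IPS_input
nft list chain inet firewall IPS_output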
Any idea?
Thanks for your time!
I am trying to use this example to connect NiFi to Flink:
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
SiteToSiteClientConfig clientConfig = new SiteToSiteClient.Builder()
.url("http://localhost:8090/nifi")
.portName("Data for Flink")
.requestBatchCount(5)
.buildConfig();
SourceFunction<NiFiDataPacket> nifiSource = new NiFiSource(clientConfig);
DataStream<NiFiDataPacket> streamSource = env.addSource(nifiSource).setParallelism(2);
DataStream<String> dataStream = streamSource.map(new MapFunction<NiFiDataPacket, String>() {
@Override
public String map(NiFiDataPacket value) throws Exception {
return new String(value.getContent(), Charset.defaultCharset());
}
});
dataStream.print();
env.execute();
I am running NiFi as a standalone server with the default properties, except for these:
nifi.remote.input.host=localhost
nifi.remote.input.secure=false
nifi.remote.input.socket.port=8090
nifi.remote.input.http.enabled=true
The call fails each time, with the following log in NiFi:
[Site-to-Site Worker Thread-24] o.a.nifi.remote.SocketRemoteSiteListener
Unable to communicate with remote instance null due to
org.apache.nifi.remote.exception.HandshakeException: Handshake
with nifi://localhost:61680 failed because the Magic Header
was not present; closing connection
NiFi version: 1.7.1, Flink version: 1.7.1
After using the nifi-toolkit I removed the custom value of nifi.remote.input.socket.port, added transportProtocol(SiteToSiteTransportProtocol.HTTP) to my SiteToSiteClientConfig, and used http://localhost:8080/nifi as the URL.
The reason I changed the port in the first place is that without specifying the HTTP protocol, the client uses RAW by default.
And when using the RAW protocol from the Flink side, the client cannot create a Transaction and prints the following warning:
Unable to refresh Remote Group's peers due to Remote instance of NiFi
is not configured to allow RAW Socket site-to-site communications
That's why I thought it was a port issue.
So now, with the default config of NiFi, this works as expected:
SiteToSiteClientConfig clientConfig = new SiteToSiteClient.Builder()
.url("http://localhost:8080/nifi")
.portName("portNameAsInNifi")
.transportProtocol(SiteToSiteTransportProtocol.HTTP)
.requestBatchCount(1)
.buildConfig();
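For completeness, the relevant nifi.properties entries on the NiFi side are then simply the defaults again (these should be the stock values, but double-check them against your installation):
nifi.web.http.port=8080
nifi.remote.input.http.enabled=true
nifi.remote.input.socket.port=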
I am trying to connect my ESP32, which runs on the ESP-IDF framework, to MQTT. I have imported this MQTT library successfully and have set up the configuration like this:
static void mqtt_app_start(void)
{
const esp_mqtt_client_config_t mqtt_cfg = {
// .host = "m15.cloudmqtt.com",
.uri = "mqtt://rxarkckf:smNb81Ppfe7T#m15.cloudmqtt.com:10793", // uri in the format (username:password#domain:port)
// .host = "m15.cloudmqtt.com", // config with host, port, user, password seperated
// .port = 10793,
// .username = "rxarkckf",
// .password = "smNb81Ppfe7T",
.event_handle = mqtt_event_handler,
// .user_context = (void *)your_context
};
esp_mqtt_client_handle_t client = esp_mqtt_client_init(&mqtt_cfg);
esp_mqtt_client_start(client);
}
I call mqtt_app_start(); in my app_main function. After uploading the code, my ESP32 doesn't connect to the MQTT broker and outputs this:
I (12633410) MQTT_CLIENT: Sending MQTT CONNECT message, type: 1, id: 0000
E (12633710) MQTT_CLIENT: Error network response
I (12633710) MQTT_CLIENT: Error MQTT Connected
I (12633710) MQTT_CLIENT: Reconnect after 10000 ms
I (12633710) MQTT_SAMPLE: MQTT_EVENT_DISCONNECTED
I have double checked that the values for the host, username, password, and port are all correct. When I look at the logs on the web interface hosted at cloudmqtt.com, I can see this output:
2018-11-17 03:50:53: New connection from 73.94.66.49 on port 10793.
2018-11-17 03:50:53: Invalid protocol "MQIs�" in CONNECT from 73.94.66.49.
2018-11-17 03:50:53: Socket error on client <unknown>, disconnecting.
2018-11-17 03:51:20: New connection from 73.94.66.49 on port 10793.
I had a similar experience using Mosquitto.
Adding this line to mqtt_config.h made my MQTT work:
#define CONFIG_MQTT_PROTOCOL_311
I think the more correct way to set this configuration is in sdkconfig.h, either manually or using "make menuconfig".
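In other words, something like this in the generated sdkconfig (the option name follows the esp-mqtt Kconfig naming and is an assumption to verify in menuconfig):
CONFIG_MQTT_PROTOCOL_311=y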
The problem is very simple: the library you are using implements the MQTT 3.1 protocol, while the server you are trying to connect to expects MQTT 3.1.1 or higher.
As specified in the document (https://www.oasis-open.org/committees/download.php/55095/mqtt-diffs-v1.0-wd01.doc):
4.1 Protocol Name
The Protocol Name is present in the variable header of a MQTT CONNECT control packet. The Protocol Name is a UTF-8 encoded string. In MQTT 3.1 the protocol name is "MQISDP". In MQTT 3.1.1 the protocol name is represented as "MQTT".
For technical info:
https://mqtt.org/mqtt-specification/
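For illustration, the start of the CONNECT variable header differs between the two versions roughly like this (a sketch of the encoded bytes, which also explains the truncated "MQIs�" in the broker log above):
MQTT 3.1:   0x00 0x06 'M' 'Q' 'I' 's' 'd' 'p'   ... protocol level 0x03
MQTT 3.1.1: 0x00 0x04 'M' 'Q' 'T' 'T'           ... protocol level 0x04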
Wanting to test a MongoDB server up/down procedure connected to Node/Mongoose, we found out that Mongoose can sometimes open hundreds of TCP sockets (which is not necessary and potentially blocking for a user who is limited to a certain number of sockets). This occurs in the following case and environment:
Node supervised with PM2 and MongoDB supervised with daemontools.
At a normal, clean startup:
$ netstat -alpet | grep mongo
tcp 0 0 *:27017 *:* LISTEN mongo 65910844 22930/mongod
tcp 0 0 localhost.localdomain:27017 localhost.localdomain:54595 ESTABLISHED mongo 65911104 22930/mongod
The last "ESTABLISHED" line is repeated 5 times, since the option (poolSize: 5) is specified in Mongoose ("mongo" is the user running mongod under daemontools).
When we have this Node procedure:
mongoose.connection.on('disconnected', function () {
    var options = {
        server: {
            auto_reconnect: true,
            poolSize: 5,
            socketOptions: { connectTimeoutMS: 5000 }
        }
    };
    console.log('Mongoose default connection disconnected ' + mongoose.connection.readyState);
    mongoose.connect(dbURI, options);
});
and we bring MongoDB down via daemontools (mongodbdaemon is a simple mongod command):
svc -d /service/mongodbdaemon
there is of course no mongod running on the system (verified with the netstat command), and the web server pages that use Mongoose report, as expected:
{"name":"MongoError","message":"topology was destroyed"}
The problem occurs at this stage. From the moment we bring down MongoDB, Mongoose accumulates all the connect() calls made in the 'disconnected' event handler. This means that the longer we wait before bringing MongoDB back up, the more TCP connections will be opened.
So bringing MongoDB back up with
svc -u /service/mongodbdaemon
gives the following:
$ netstat -alpet | grep mongo | wc -l
850 'ESTABLISHED' TCP connections to mongod!
If we bring mongod down again, the hundreds of connections remain in the TIME_WAIT state until Linux cleans up the socket pool.
Questions
Can we check if a MongoDB instance is available before connecting to it?
Can we configure Mongoose not to accumulate reconnection attempts every millisecond or so?
Is there a buffer for pending connection operations (as there is for mongoose.insert[...]) that we can access or clean manually?
Problem reproducible on CentOS 6.7 / MongoDB 3.0.6 / mongoose 4.1.8 / node 4.0.0.
Edit:
On the official mongoose site, where I posted this question after posting it here, I received an answer: "using auto_reconnect: true on the initial connect() operation (which is set by default), there is no reason to reconnect() in a disconnect event callback".
This is true and it works just fine, but the question now is why this happens and how to avoid it (it is serious enough at the Linux system level to be an issue in Mongoose).
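For reference, a minimal sketch of the approach suggested there (assuming mongoose 4.x and the dbURI/options from the question): connect once, let the driver handle reconnection, and only log in the event handlers instead of calling connect() again.
var mongoose = require('mongoose');

var options = {
    server: {
        auto_reconnect: true,   // default; the driver handles reconnection itself
        poolSize: 5,
        socketOptions: { connectTimeoutMS: 5000 }
    }
};

// Connect once at startup; do NOT call mongoose.connect() again in 'disconnected'.
mongoose.connect(dbURI, options);

mongoose.connection.on('disconnected', function () {
    console.log('Mongoose disconnected, readyState=' + mongoose.connection.readyState);
});

mongoose.connection.on('reconnected', function () {
    console.log('Mongoose reconnected');
});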
Thanks!
I use nginx on Debian. So besides the main configuration file /etc/nginx/nginx.conf there are folders /etc/nginx/sites-available/ with the vhost config files and /etc/nginx/sites-enabled/ with the links to active vhosts.
So let me ask my question first, because the explanation is long and maybe you don't need to read all of it...
I want to be able to use several vhost templates like this:
server {
listen 80;
server_name ~^(?<project>.+)\.(?<area>.+)\.loc$;
if ($host ~ ^(?<project>.+)\.(?<area>.+)\.loc$) {
set $folder "$area/$project";
}
...
access_log /var/log/nginx/$area/$project.access.log;
error_log /var/log/nginx/error.log;
...
root /var/www/$folder/;
...
}
and to define which vhost is based on which template. Furthermore, it would be great to have the possibility to "extend" the template, so that I can add new settings to my vhosts and redefine the settings inherited from the template.
How can I achieve this?
My current vhost file structure looks like this:
/etc/nginx/sites-available contains the following files:
default (the default vhost)
ax-common-vhost (the vhost template)
test.sandbox.loc (a vhost based on the template ax-common-vhost; it includes the template via the include rule)
...and some further ones...
/etc/nginx/sites-enabled contains the following links:
default -> /etc/nginx/sites-available/default
test.sandbox.loc -> /etc/nginx/sites-available/test.sandbox.loc
...and some further ones...
The template ax-common-vhost defines some options like root folder dynamically, using the server name:
server {
listen 80;
server_name ~^(?<project>.+)\.(?<area>.+)\.loc$;
if ($host ~ ^(?<project>.+)\.(?<area>.+)\.loc$) {
set $folder "$area/$project";
}
...
access_log /var/log/nginx/$area/$project.access.log;
error_log /var/log/nginx/error.log;
...
root /var/www/$folder/;
...
}
So when I want to create a new vhost, I just create a new vhost file and a link to it, and I don't need to copy and paste the settings or set the paths manually. I just need to follow the convention that a host %project%.%area%.loc has to be placed in [my webroot]/%area%/%project% (for example, test.sandbox.loc is served from /var/www/sandbox/test/).
I thought it worked via the include rule: the server gets a request for x.y.loc, looks for a file named like that, opens the file, and finally processes the directives in it (the include directive, and thereby the whole content of the included template).
But it's not so. Nginx seems simply to scan the whole folder /etc/nginx/sites-available/ (alphabetically?) and stop at the first file / server directive whose server_name the requested host name equals/matches.
That means, the include
include /etc/nginx/sites-available/ax-common-vhost;
is not used. Actually, I've removed the include directives from the vhost files -- and nothing has changed!
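For context, on Debian the main config pulls the enabled vhosts in with a line like this in the http block of /etc/nginx/nginx.conf, so matching is done by server_name across all loaded server blocks, not by file name:
include /etc/nginx/sites-enabled/*;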
And it's a problem, because when I add a new template, e.g. for my Zend Framework projects (with [project root]/public/ as the root):
file ax-zf-vhost:
server {
listen 80;
server_name ~^(?<project>.+)\.(?<area>.+)\.loc$;
if ($host ~ ^(?<project>.+)\.(?<area>.+)\.loc$) {
set $folder "$area/$project";
}
...
access_log /var/log/nginx/$area/$project.access.log;
error_log /var/log/nginx/error.log;
...
root /var/www/$folder/public/;
...
}
..., it is ignored, since the server gets no information that the vhost myzf.sandbox.loc is based on ax-zf-vhost. Instead it just loops over the /etc/nginx/sites-available/ folder, finds ax-common-vhost, myzf.sandbox.loc matches the pattern ~^(?<project>.+)\.(?<area>.+)\.loc$, and nginx uses ax-common-vhost for myzf.sandbox.loc.
How can this problem be solved?
Thanks
OK, it works now!
I've changed the server_name directive of my basic vhost file, so it processes only requests with the hostnames listed in the directive:
file ax-common-vhost:
server {
listen 80;
server_name test.sandbox.loc foo.sandbox.loc bar.sandbox.loc;
...
}
instead of the generic server_name ~^(?<project>.+)\.(?<area>.+)\.loc$;.
The vhost files test.sandbox.loc, foo.sandbox.loc, bar.sandbox.loc etc. in /etc/nginx/sites-available/ and the links to them in /etc/nginx/sites-enabled/ are not needed anymore. I've created a link /etc/nginx/sites-enabled/ax-common-vhost to /etc/nginx/sites-available/ax-common-vhost instead.
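On Debian that boils down to roughly these commands (file names as above; nginx -t just validates the config before reloading):
rm /etc/nginx/sites-enabled/test.sandbox.loc /etc/nginx/sites-enabled/foo.sandbox.loc /etc/nginx/sites-enabled/bar.sandbox.loc
ln -s /etc/nginx/sites-available/ax-common-vhost /etc/nginx/sites-enabled/ax-common-vhost
nginx -t && service nginx reload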
The approach for the second common vhost file is the same and it works.
The first problem is resolved: the settings can be shared by several vhosts / server blocks, and I can easily add new vhosts without duplicating the settings.
But the vhost file with the settings cannot be extended by another file. Is it possible to do this, so that a file B can "extend" a file A, inheriting its settings and overriding only some directives/rules? How can I realize that?