Node.js: modify server to accept requests only from an array of IPs

I have a TCP server that I need to modify to accept requests only from predefined IPs. My idea was to create an array containing all allowed IPs, but how do I do the check, and how do I wrap it around my existing code?
Code:
// Load the TCP library
var net = require('net');

// Start a TCP server
net.createServer(function (socket) {
    socket.setKeepAlive(true);
    // TODO: Add mysql connection list entry
    console.log('connected', socket.remoteAddress);

    // Note: 'close' passes a boolean (hadError), not an Error object
    socket.on('close', function (hadError) {
        // TODO: Add mysql connection list entry
        console.log('disconnected', socket.remoteAddress);
    });
}).listen(5000);

// Put a friendly message on the terminal of the server.
console.log("Server running at port 5000");

I think this is the wrong tool for the job. You should configure access to the application using the system firewall. A firewall allows you to:
select IP ranges in a flexible manner
block as well as allow
work with different IP versions
work with different protocols
integrate better into your IT infrastructure
However, if you don't have access to the firewall and you need something quick and dirty, you can easily kick connections that are not in your list by checking the IP address against an allow list:
var allow_list = ['10.1.1.1', '10.1.1.2'];
var net = require('net');

net.createServer(function (socket) {
    if (allow_list.indexOf(socket.remoteAddress) < 0) {
        socket.destroy();
        return;
    }
    socket.setKeepAlive(true);
    // do stuff
}).listen(5000);

console.log("Server running at port 5000");
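One caveat (my addition, not part of the original answer): on a dual-stack listener, socket.remoteAddress may arrive as an IPv4-mapped IPv6 address such as "::ffff:10.1.1.1", which fails a plain string comparison. A minimal sketch that normalizes the address before the check; normalizeAddress and isAllowed are hypothetical helper names:

```javascript
var allow_list = ['10.1.1.1', '10.1.1.2'];

// Strip the IPv4-mapped IPv6 prefix, if present, before comparing.
function normalizeAddress(addr) {
    return addr && addr.indexOf('::ffff:') === 0 ? addr.slice(7) : addr;
}

function isAllowed(addr) {
    return allow_list.indexOf(normalizeAddress(addr)) >= 0;
}
```

Inside the connection handler you would then do: if (!isAllowed(socket.remoteAddress)) { socket.destroy(); return; }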

Related

How to set Ipv4 addresses with dbus-python (Hotspot and ethernet)

(Fairly new to networking.)
I'm trying to set up a small yet somewhat complicated network on my Ubuntu 18.04 machine.
The topology of the network: the Ubuntu machine (called "the server") will act as the DHCP server for both hotspot and Ethernet. Connected to it are two Ubuntu client machines and a camera.
I've implemented "the server" with the python-dbus library to bring a hotspot connection up and down, which works as intended. But my problem is how to manage the IP addresses and the routing.
I'll elaborate on the two problems I am facing:
In order to change the IPv4 address of the hotspot AP, I found out I could edit a file under "/etc/NetworkManager/system-connections/", adding another line: "address1=X.Y.Z.W" (my desired IP address).
But editing the file isn't the proper way for my requirements; I would rather do it from the code itself. Which changes do I need to make to the code to achieve the same result?
this is how the code connection object of dbus looks like:
def get_hotspot_struct(iface, uuid, ssid, password):
    s_con = dbus.Dictionary({
        'type': '802-11-wireless',
        'uuid': uuid,
        'id': 'PixellotHotspot',
        'interface-name': iface,
    })
    s_wifi = dbus.Dictionary({
        'ssid': dbus.ByteArray(ssid.encode()),
        'mode': 'ap',
        'band': 'bg',
        'channel': dbus.UInt32(1),
    })
    s_wsec = dbus.Dictionary({
        'key-mgmt': 'wpa-psk',
        'psk': password,
    })
    s_ipv4 = dbus.Dictionary({
        'method': 'shared',
    })
    s_ipv6 = dbus.Dictionary({
        'method': 'ignore',
    })
    con = dbus.Dictionary({
        'connection': s_con,
        '802-11-wireless': s_wifi,
        '802-11-wireless-security': s_wsec,
        'ipv4': s_ipv4,
        'ipv6': s_ipv6,
    })
    logger.info('Getting hotspot connection template')
    logger.info(con)
    return con
Can I do the same for wired Ethernet connections?
So far I've figured out that I can edit "/etc/netplan/01-netconf.yaml" to set dhcp to false and set a desired IP "X.Y.Z.W" for the Ethernet interface eth0.
But that seems to only work on the server: when I connect the Ubuntu clients to the server with an Ethernet cable, the server won't give the clients any IP at all.
It does for the hotspot, but not for the Ethernet.
I know my problem is very specific and all over the place, but I would appreciate any help. Post here, send me an email, or Facebook me (Yves Halimi) if you have knowledge about this issue. Will compensate for help!
The D-Bus API is documented in man nm-settings-dbus.
To NetworkManager, it's always about creating connection profiles and activating them. So if you have code that can create one profile, another profile works basically the same -- just some keys will be different.
I find it helpful to use one of the other NetworkManager clients and compare with what they do. For example, you could also just create the profile with nmcli connection add type ..., then get the D-Bus path via nmcli -f all connection show and finally look at how the profile appears on D-Bus:
busctl -j call org.freedesktop.NetworkManager /org/freedesktop/NetworkManager/Settings/1 org.freedesktop.NetworkManager.Settings.Connection GetSettings
See examples upstream: python+dbus
Maybe you'll find it easier to use python + pygobject + libnm. In that case, see examples here. The main downside is that you'll have an additional dependency (pygobject). libnm isn't an additional dependency, you'll already have that if you use NetworkManager.

Making HTTPS requests from ESP32

I am making a POST request from my ESP32-S2 Kaluga kit.
I have tested the HTTP request while running a server program in my LAN.
I am using
esp_http_client_handle_t and esp_http_client_config_t from
esp_http_client.h to do this.
Now I have an HTTPS API set up in AWS API Gateway, and I get the following error with HTTPS:
E (148961) esp-tls-mbedtls: No server verification option set in esp_tls_cfg_t structure. Check esp_tls API reference
E (148961) esp-tls-mbedtls: Failed to set client configurations, returned [0x8017] (ESP_ERR_MBEDTLS_SSL_SETUP_FAILED)
E (148971) esp-tls: create_ssl_handle failed
E (148981) esp-tls: Failed to open new connection
E (148981) TRANSPORT_BASE: Failed to open a new connection
E (148991) HTTP_CLIENT: Connection failed, sock < 0
How can I solve this? Thank you
Edit:
The following is the code I use.
I create an HTTP client for the POST request:
esp_err_t client_event_get_handler(esp_http_client_event_handle_t evt)
{
    switch (evt->event_id)
    {
    case HTTP_EVENT_ON_DATA:
        printf("HTTP GET EVENT DATA: %s", (char *)evt->data);
        break;
    default:
        break;
    }
    return ESP_OK;
}

static void post_rest_function(char *payload, int len)
{
    esp_http_client_config_t config_post = {
        .url = SERVER_URL,
        .method = HTTP_METHOD_POST,
        .event_handler = client_event_get_handler,
        .auth_type = HTTP_AUTH_TYPE_NONE,
        .transport_type = HTTP_TRANSPORT_OVER_TCP
    };
    esp_http_client_handle_t client = esp_http_client_init(&config_post);
    esp_http_client_set_post_field(client, payload, len);
    esp_http_client_set_header(client, "Content-Type", "image/jpeg");
    esp_http_client_perform(client);
    esp_http_client_cleanup(client);
}
and I use it in main with an image payload:
void app_main(){
....
post_rest_function( (char *)pic->buf, pic->len);
....
}
You need a certificate to make HTTPS requests. In case you don't want to implement this, just edit your sdkconfig:
"Allow potentially insecure options" -> true
"Skip server certificate verification by default" -> true
Careful, this is unsafe.
Alternatively, you may choose to include the certificates to make sure that your transfer is safe (i.e., the server is valid).
You can obtain the root SSL certificate of your host like so; watch through to the 56-minute mark for a complete explanation.
Or you may use the certificate bundle that Espressif provides in the IDF framework. For that:
In your code, add #include "esp_crt_bundle.h"
and in your esp_http_client_config_t add these:
.transport_type = HTTP_TRANSPORT_OVER_SSL,   // specify transport type
.crt_bundle_attach = esp_crt_bundle_attach,  // attach the certificate bundle
after which the process remains quite the same.
The video linked above is quite helpful; I recommend you watch the whole thing :)

Pick up connection if there is a disconnect

I make use of this specific version of the SQL Server npm package: https://github.com/patriksimek/node-mssql/tree/v3.3.0#multiple-connections
I have been looking through the documentation of tedious (the underlying lib) and Microsoft's documentation (see the GitHub link above).
I couldn't find anything that does something simple like getCurrentConnection or getConnectionStatus, or anything similar.
I have two ways to solve this problem, but I'm not happy with either of them, so that's why I'm asking here.
My first approach was to set a timeout and let the connect function call itself on each catch(err).
The second one was to handle this in the middleware, but then, even when everything is working fine, it opens a connection to SQL on every request and closes it again.
My middleware function:
api.use(function (err, req, res, next) {
    sql.close();
    sql.connect(config.database).then(() => {
        next();
    }).catch(function (err) {
        sql.close();
        server.main();
    });
});
If possible, I want to pick up the existing connection instead of closing it and starting a new one, so that when the server or the database crashes I still have some data from the existing function.
With Arnold's help I got to understand the mssql package and its inner workings a lot better.
I therefore came up with the following solution to my problem.
let intervalFunction;
const INTERVAL_DURATION = 4000;

if (require.main === module) {
    console.log("Listening on http://localhost:" + config.port + " ...");
    app.listen(config.port);
    // Try to connect to the DB and fire main on success.
    intervalFunction = setInterval(() => getConnection(), INTERVAL_DURATION);
}

function getConnection() {
    sql.close();
    sql.connect(config.database).then(() => {
        sql.close();
        clearInterval(intervalFunction);
        main();
    }).catch(function (err) {
        console.error(err);
        console.log(`DB connection will be tried again in ${INTERVAL_DURATION}ms`);
        sql.close();
    });
}
Once the initial connection has been made, even if it gets lost in the meantime, the pool will pick the connection back up automatically and handle your connections for you.
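A fixed 4000 ms retry interval works; if you would rather back off on repeated failures, the delay computation can be sketched as a pure helper (backoffDelay is a hypothetical name, not part of mssql):

```javascript
// Delay doubles with each failed attempt, capped at maxMs.
function backoffDelay(attempt, baseMs, maxMs) {
    return Math.min(baseMs * Math.pow(2, attempt), maxMs);
}
```

Instead of a fixed setInterval, you would schedule each retry with setTimeout(getConnection, backoffDelay(attempt++, 4000, 60000)).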
If I understood you correctly, you basically want to reuse connections. Tedious has built-in connection pooling, so you don't have to worry about re-using them:
var config = {
    user: '...',
    password: '...',
    server: 'localhost',
    database: '...',
    pool: {
        max: 10,
        min: 0,
        idleTimeoutMillis: 30000
    }
}
In the example above (just copied from the GitHub URL you've posted), the pool will hold up to 10 connections ready to use. Here's the beauty: the pool manager handles all connection use and re-use for you, i.e., the number of connections is elastic, based on your app's needs.
As you've mentioned, what about DB crashes? That too is built-in: connection health-check:
Internally, each Connection instance is a separate pool of TDS
connections. Once you create a new Request/Transaction/Prepared
Statement, a new TDS connection is acquired from the pool and reserved
for desired action. Once the action is complete, connection is
released back to the pool. Connection health check is built-in so once
the dead connection is discovered, it is immediately replaced with a
new one.
I hope this helps!
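The acquire/release/health-check cycle described in the quote above can be illustrated with a toy pool (my own sketch for illustration, not the actual tedious/mssql internals):

```javascript
// Toy connection pool: acquire hands out a connection (creating one if
// needed), release returns it, and dead idle connections are replaced.
function createPool(connect) {
    var idle = [];
    return {
        acquire: function () {
            // Built-in health check: discard dead idle connections.
            while (idle.length > 0) {
                var conn = idle.pop();
                if (conn.alive) return conn;
            }
            return connect(); // pool is elastic: grow on demand
        },
        release: function (conn) {
            idle.push(conn); // returned to the pool for re-use
        }
    };
}
```

The real pool adds limits (max/min) and idle timeouts on top of this, but the reserve-then-release flow is the same.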

MEANJS: Security in SocketIO

Situation
I'm using the library SocketIO in my MEAN.JS application.
In the NodeJS server controller:
var socketio = req.app.get('socketio');
socketio.sockets.emit('article.created.' + req.user._id, data);
In the AngularJS client controller:
// Create listener
Socket.on('article.created.' + Authentication.user._id, callback);
// Destroy listener
$scope.$on('$destroy', function () {
    Socket.removeListener('article.created.' + Authentication.user._id, callback);
});
Okay, works well...
Problem
If a person (a hacker or anyone else) gets hold of the user's id, they can create a listener on the same channel in another application and watch all the data that is sent to the user, for example all the notifications...
How can I do the same thing, but with more security?
Thanks!
Some time ago I stumbled upon the very same issue. Here's my solution (with minor modifications; used in production).
We will use Socket.IO namespaces to create a private room for each user. Then we can emit messages (server-side) to specific rooms; in our case, only one specific user can receive them.
But to create a private room for each connected user, we have to verify their identity first. We'll use a simple piece of authentication middleware for that, supported by Socket.IO since its 1.0 release.
1. Authentication middleware
Since its 1.0 release, Socket.IO supports middleware. We'll use it to:
Verify the connecting user's identity, using a JSON Web Token (see jwt-simple) they sent us as a query parameter. (Note that this is just an example; there are many other ways to do this.)
Save their user id (read from the token) within the socket.io connection instance, for later usage (in step 2).
Server-side code example:
var io = socketio.listen(server); // initialize the listener

io.use(function (socket, next) {
    var handshake = socket.request;
    var decoded;
    try {
        decoded = jwt.decode(handshake.query().accessToken, tokenSecret);
    } catch (err) {
        console.error(err);
        next(new Error('Invalid token!'));
        return; // don't fall through and call next() a second time
    }
    if (decoded) {
        // everything went fine - save userId as a property of this connection instance
        socket.userId = decoded.userId; // user id we just got from the token, used later
        next();
    } else {
        // invalid token - terminate the connection
        next(new Error('Invalid token!'));
    }
});
Here's an example of how to provide the token when initializing the connection, client-side:
socket = io("http://stackoverflow.com/", {
    query: 'accessToken=' + accessToken
});
2. Namespacing
Socket.io namespaces provide us with the ability to create a private room for each connected user. Then we can emit messages into a specific room (so only users within it will receive them, as opposed to every connected client).
In the previous step we made sure that:
Only authenticated users can connect to our Socket.IO interface.
For each connected client, we saved the user id as a property of the socket.io connection instance (socket.userId).
All that's left is to join the proper room upon each connection, with a name equal to the user id of the freshly connected client.
io.on('connection', function (socket) {
    socket.join(socket.userId); // "userId" saved during authentication
    // ...
});
Now, we can emit targeted messages that only this user will receive:
io.in(req.user._id).emit('article.created', data); // we can safely drop req.user._id from event name itself
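To see why room targeting is safer than per-user event names, here is a toy sketch of room-scoped delivery (my own illustration, not the Socket.IO internals): emits to a room reach only sockets that joined it, whereas the old article.created.<userId> events were broadcast to every connected client.

```javascript
// Toy room registry: handlers only receive events for rooms they joined.
function createRooms() {
    var rooms = {}; // room name -> array of handler callbacks
    return {
        join: function (room, handler) {
            (rooms[room] = rooms[room] || []).push(handler);
        },
        emitTo: function (room, event, data) {
            (rooms[room] || []).forEach(function (h) { h(event, data); });
        }
    };
}
```

Since joining a room requires passing the authentication middleware first, an attacker who merely knows a user id can no longer listen in.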

Redis - Pub/Sub Internals

I've written a small Socket.IO server, which works fine: I can connect to it and I can send/receive messages, so everything is working OK. Just the relevant part of the code is presented here:
var RedisStore = require('socket.io/lib/stores/redis');

const pub = redis.createClient('127.0.0.1', 6379);
const sub = redis.createClient('127.0.0.1', 6379);
const store = redis.createClient('127.0.0.1', 6379);

io.configure(function () {
    io.set('store', new RedisStore({
        redisPub: pub,
        redisSub: sub,
        redisClient: store
    }));
});

io.sockets.on('connection', function (socket) {
    socket.on('message', function (msg) {
        pub.publish("lobby", msg);
    });

    /*
     * Subscribe to the lobby and receive messages.
     */
    var sub = redis.createClient('127.0.0.1', 6379);
    sub.subscribe("lobby");
    sub.on('message', function (channel, msg) {
        socket.send(msg);
    });
});
Here I'm interested in the case where a certain client is subscribed to a different room, which is why I also use the sub Redis variable inside each socket connection: each client can be subscribed to a different room and receive messages from there. I'm not entirely sure whether the code above is OK, so please let me know if I need to do anything more than define the sub Redis connection inside the Socket.IO connection handler. This also means that a new Redis connection is spawned for each connecting client to serve messages from the subscribed room? I guess this is quite an overhead, so I would like to solve it any way possible.
Thank you
Both node.js and redis are very good at handling lots of connections (thousands is no problem), so what you're doing is fine.
As a side note, you will want to look into upping your file descriptor limits if you do intend on supporting thousands of connections.
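That said, if you would rather avoid one Redis connection per client, a single shared subscriber can fan messages out to the sockets interested in each channel. A minimal sketch; createDispatcher is my own hypothetical helper, wired to the sub client from the question:

```javascript
// One shared subscriber: a registry maps each channel to the sockets
// that want its messages, and a single dispatch call fans them out.
function createDispatcher() {
    var channels = {}; // channel name -> array of sockets
    return {
        add: function (channel, socket) {
            (channels[channel] = channels[channel] || []).push(socket);
        },
        remove: function (channel, socket) {
            var list = channels[channel] || [];
            var i = list.indexOf(socket);
            if (i >= 0) list.splice(i, 1);
        },
        dispatch: function (channel, msg) {
            (channels[channel] || []).forEach(function (s) { s.send(msg); });
        }
    };
}
```

You would subscribe once (sub.subscribe("lobby"); sub.on('message', dispatcher.dispatch);), then call dispatcher.add("lobby", socket) in the connection handler and dispatcher.remove on 'disconnect'.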
