I've written a small Socket.IO server, which works fine: I can connect to it and I can send/receive messages, so everything is working OK. Only the relevant part of the code is presented here:
var RedisStore = require('socket.io/lib/stores/redis');
var redis = require('redis'); // node_redis client used by the store

// note: redis.createClient takes (port, host)
const pub = redis.createClient(6379, '127.0.0.1');
const sub = redis.createClient(6379, '127.0.0.1');
const store = redis.createClient(6379, '127.0.0.1');

io.configure(function() {
    io.set('store', new RedisStore({
        redisPub: pub,
        redisSub: sub,
        redisClient: store
    }));
});
io.sockets.on('connection', function(socket) {
    socket.on('message', function(msg) {
        pub.publish("lobby", msg);
    });

    /*
     * Subscribe to the lobby and receive messages.
     */
    var sub = redis.createClient(6379, '127.0.0.1');
    sub.subscribe("lobby");
    sub.on('message', function(channel, msg) {
        socket.send(msg);
    });

    // Close the per-client subscriber when the socket goes away,
    // otherwise every disconnect leaks a Redis connection.
    socket.on('disconnect', function() {
        sub.quit();
    });
});
Here, I'm interested in the case where a certain client is subscribed to a different room, which is why I'm also creating a sub Redis client inside each socket connection: each client can be subscribed to a different room and receive messages from there. I'm not entirely sure whether the code above is OK, so please let me know if I need to do anything other than define the sub Redis connection inside the Socket.IO connection handler. This also means that a new Redis connection is spawned for each connecting client to serve messages from the subscribed room, which I guess is quite an overhead, so I would like to solve it any way possible.
Thank you
Both node.js and redis are very good at handling lots of connections (thousands is no problem), so what you're doing is fine.
As a side note, you will want to look into upping your file descriptor limits if you do intend on supporting thousands of connections.
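If the per-client subscriber ever does become a bottleneck, a minimal sketch of an alternative (not from the original setup; the roomSockets map and the way a room is chosen are assumptions) is to share a single subscriber connection and fan messages out in process:

var redis = require('redis');
var sub = redis.createClient(6379, '127.0.0.1');

var roomSockets = {}; // channel name -> sockets interested in it

// One 'message' handler dispatches for every subscribed channel.
sub.on('message', function(channel, msg) {
    (roomSockets[channel] || []).forEach(function(socket) {
        socket.send(msg);
    });
});

io.sockets.on('connection', function(socket) {
    var room = 'lobby'; // in a real app, taken from the handshake or a join message
    if (!roomSockets[room]) {
        roomSockets[room] = [];
        sub.subscribe(room); // one Redis subscription per room, not per client
    }
    roomSockets[room].push(socket);

    socket.on('disconnect', function() {
        roomSockets[room] = roomSockets[room].filter(function(s) {
            return s !== socket;
        });
    });
});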
I'm trying to build a private messaging app using socket.io. My assumption was that per session there would only ever be one socket.id, but if I keep track of my socket.id and console.log it, it's constantly changing and I don't know why.
Here's a snippet of how I am saving a new user.
// Keep track of connected users
var aClients = [];

// Listen on the connection for incoming sockets
io.on('connection', function (socket) {
    console.log(socket.id);

    // Add the user to the socket session
    socket.on("add-user", function(username) {
        var oCurrentUser = {
            username: username,
            id: socket.id
        };
        console.log("socket.id for this user is", oCurrentUser.id);
        aClients.push(oCurrentUser);
        console.log(aClients);
        socket.emit('login', oCurrentUser);
    });
});
Output after I create a few users:
listening on *:3000
Vws9v-Wegjx4bvBKAAAA
e611mmTgdYmFvhuMAAAB
IPE95tFpgem0eyvyAAAC
m5YLVR0PE_Qqc-AcAAAD
GXbyRVYAnHgBz4VzAAAE
When I do console.log(socket.id) I get five IDs like this. I would have assumed it would return just one, right?
I'm using this specific version of the SQL Server npm package: https://github.com/patriksimek/node-mssql/tree/v3.3.0#multiple-connections.
I have been looking through the documentation of tedious (the underlying lib) and Microsoft's documentation (see the GitHub link above).
I couldn't find anything that does something simple like getCurrentConnection or getConnectionStatus or anything similar.
I had two ways to solve this problem, but I'm not happy with either of them, which is why I'm asking here.
My first approach was to set a timeout and let the connect function call itself on each catch(err).
The second was to handle it in middleware, but then, even when everything is working fine, it would open a connection to SQL Server on every request and close it again.
My middleware function:
api.use(function(req, res, next) {
    sql.close();
    sql.connect(config.database).then(() => {
        next();
    }).catch(function(err) {
        sql.close();
        server.main();
    });
});
If possible, I want to pick up the existing connection instead of closing it and starting a new one, so that when the server or the database crashes I still have some data from the existing function.
With the help of Arnold I got to understand the mssql package and its inner workings a lot better.
I therefore came up with the following solution to my problem.
let intervalFunction;
const INTERVAL_DURATION = 4000;

if (require.main === module) {
    console.log("Listening on http://localhost:" + config.port + " ...");
    app.listen(config.port);
    // Try to connect to the DB and fire main on success.
    intervalFunction = setInterval(() => getConnection(), INTERVAL_DURATION);
}

function getConnection() {
    sql.close();
    sql.connect(config.database).then(() => {
        sql.close();
        clearInterval(intervalFunction);
        main();
    }).catch(function(err) {
        console.error(err);
        console.log(`DB connection will be tried again in ${INTERVAL_DURATION}ms`);
        sql.close();
    });
}
Once the initial connection has been made, even if it gets lost in the meantime, the pool will pick the connection back up automatically and handle your connections for you.
If I understood you correctly, you basically want to reuse connections. The mssql package has built-in connection pooling (on top of tedious), so you don't have to worry about re-using them:
var config = {
    user: '...',
    password: '...',
    server: 'localhost',
    database: '...',
    pool: {
        max: 10,
        min: 0,
        idleTimeoutMillis: 30000
    }
}
In the example above (just copied from the GitHub URL you've posted), there will be up to 10 connections in the pool ready to use. Here's the beauty: the pool manager handles all connection use and re-use for you, i.e., the number of connections is elastic based on your app's needs.
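For illustration, here is a minimal sketch of querying through that shared pool, using the config above (node-mssql v3 API; the table name is made up):

const sql = require('mssql');

sql.connect(config).then(() => {
    // Each Request transparently borrows a connection from the pool
    // and releases it back when the query completes.
    return new sql.Request().query('SELECT TOP 1 * FROM Users');
}).then(recordset => {
    console.dir(recordset); // in v3 the promise resolves with the recordset itself
}).catch(err => {
    console.error(err);
});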
As you've mentioned, what about DB crashes? That too is built-in: connection health-check:
Internally, each Connection instance is a separate pool of TDS connections. Once you create a new Request/Transaction/Prepared Statement, a new TDS connection is acquired from the pool and reserved for the desired action. Once the action is complete, the connection is released back to the pool. Connection health check is built-in, so once a dead connection is discovered, it is immediately replaced with a new one.
I hope this helps!
Situation
I'm using the SocketIO library in my MEAN.JS application.
In the NodeJS server controller:
var socketio = req.app.get('socketio');
socketio.sockets.emit('article.created.'+req.user._id, data);
In the AngularJS client controller:
// Creating listener
Socket.on('article.created.' + Authentication.user._id, callback);

// Destroy listener
$scope.$on('$destroy', function() {
    Socket.removeListener('article.created.' + Authentication.user._id, callback);
});
Okay. It works well...
Problem
If a person (a hacker or anyone else) gets the id of the user, they can create a listener on the same channel in another application and watch all the data that is sent to the user, for example all the notifications...
How can I do the same thing but with more security?
Thanks!
Some time ago I stumbled upon the very same issue. Here's my solution (with minor modifications - used in production).
We will use Socket.IO namespacing to create a private room for each user. Then we can emit messages (server-side) into a specific room, so that only one specific user can receive them.
But to create a private room for each connected user, we have to verify their identity first. We'll use a simple piece of authentication middleware for that, supported by Socket.IO since its 1.0 release.
1. Authentication middleware
Since its 1.0 release, Socket.IO supports middleware. We'll use it to:
Verify the connecting user's identity using a JSON Web Token (see jwt-simple) sent to us as a query parameter. (Note that this is just an example; there are many other ways to do this.)
Save the user id (read from the token) on the socket.io connection instance, for later usage (in step 2).
Server-side code example:
var jwt = require('jwt-simple'); // tokenSecret is the shared secret used to issue the tokens
var io = socketio.listen(server); // initialize the listener

io.use(function(socket, next) {
    var handshake = socket.handshake;
    var decoded;
    try {
        decoded = jwt.decode(handshake.query.accessToken, tokenSecret);
    } catch (err) {
        console.error(err);
        return next(new Error('Invalid token!'));
    }
    if (decoded) {
        // everything went fine - save userId as a property of this connection instance
        socket.userId = decoded.userId; // user id we just got from the token, to be used later
        next();
    } else {
        // invalid token - terminate the connection
        next(new Error('Invalid token!'));
    }
});
Here's an example of how to provide the token when initializing the connection, client-side:
socket = io("http://stackoverflow.com/", {
    query: 'accessToken=' + accessToken
});
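For completeness, a hedged sketch of how the token might be issued server-side with jwt-simple (the /login route and the payload shape are assumptions; the payload just has to carry whatever the middleware reads back, here userId, signed with the same tokenSecret):

var jwt = require('jwt-simple');

app.post('/login', function(req, res) {
    // ...verify the user's credentials first, then:
    var accessToken = jwt.encode({ userId: req.user._id }, tokenSecret);
    res.json({ accessToken: accessToken });
});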
2. Namespacing
Socket.io namespacing provides us with the ability to create a private room for each connected user. Then we can emit messages into a specific room (so only users within it will receive them, as opposed to every connected client).
In the previous step we made sure that:
Only authenticated users can connect to our Socket.IO interface.
For each connected client, we saved the user id as a property of the socket.io connection instance (socket.userId).
All that's left to do is to join the proper room upon each connection, with a name equal to the user id of the freshly connected client.
io.on('connection', function(socket) {
    socket.join(socket.userId); // "userId" saved during authentication
    // ...
});
Now, we can emit targeted messages that only this user will receive:
io.in(req.user._id).emit('article.created', data); // we can safely drop req.user._id from event name itself
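On the client side this means the listener from the original setup no longer needs the user id embedded in the event name; a minimal sketch:

// Room membership already scopes delivery to this user,
// so a plain event name is enough.
Socket.on('article.created', callback);

$scope.$on('$destroy', function() {
    Socket.removeListener('article.created', callback);
});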
I'm working on an Angular/Node app where people can have many 1:1 chats with other users (like WhatsApp without groups), using socket.io and btford's angular-socket module (https://github.com/btford/angular-socket-io). Right now: A) a client joins a socket.io room using emit. The client code is:
mySocket.emit('joinroom', room);
Server code is:
socket.on('joinroom', function (room) {
    socket.join(room);
});
B) Chat messages are sent to the server via emit. Client code is:
mySocket.emit('sendmsg', data, function(data) {
    console.log(data);
});
and C) the server should send messages to others in the room via broadcast. Server code is:
socket.on('sendmsg', function (text, room, sender, recipient, timestamp) {
    // Some code here to save message to database before broadcasting to other users
    console.log('This works');
    socket.broadcast.to(room).emit('relaymsg', msg);
});
Client code is
$scope.$on('socket:relaymsg', function(event, data) {
    console.log('This only sometimes works');
    // do stuff to show that message was received
});
A and B seem to work fine, but C seems to be very unreliable. The server code seems to be OK, but the client does not always receive the message. Sometimes it works and sometimes it does not; i.e., 'This works' always shows up, but 'This only sometimes works' does not always show up.
1) Any thoughts on what could be causing this issue? Are there any errors in my code?
2) Is broadcast and rooms the right way to be setting this up if there are many users, all of which can have multiple 1:1 chats with other users?
In case it helps, this is the factory code for the angular-socket module
.factory('mySocket', function (socketFactory, server) {
    var socket = socketFactory({
        ioSocket: io.connect(server)
    });
    socket.forward('relaymsg');
    return socket;
});
Appreciate any help you can provide!! Thanks in advance!
Thanks everyone for the comments. I believe I found the main issues; there were two things causing problems:
1) The bigger issue, I think, is that I'm using Node clusters, and as a result users might join rooms on different workers and not be able to communicate with each other. I've ended up adding sticky sessions and Redis per the instructions here: http://socket.io/docs/using-multiple-nodes/
Sticky sessions are pretty useful. As an FYI, since the docs don't mention it, the sticky-session module automatically creates workers and re-spawns them if they are killed.
I couldn't find many examples of how to implement sticky + Redis, since socket.io 1.0 is relatively new and seems to deal with Redis differently from prior versions, but these were very helpful:
https://github.com/Automattic/socket.io-redis/issues/31
https://github.com/evilstudios/chat-example-cluster/blob/master/index.js
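For reference, a minimal sketch of the Redis adapter wiring those links describe (socket.io >= 1.0; the host, port, and standalone listen call are assumptions for a single worker):

var io = require('socket.io')(3000);
var redisAdapter = require('socket.io-redis');

// Every worker points at the same Redis instance, so broadcasts and
// room emits reach sockets connected to other workers too.
io.adapter(redisAdapter({ host: 'localhost', port: 6379 }));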
2) Every time the user closed their phone, it would disconnect them from the chat room, even if the chat room was the last screen open on the phone.
Hope that helps people in the future!
I'm building a closed app (users need to authenticate in order to use it). I'm having trouble identifying the currently authenticated user from my Latchet session. Since Apache does not support long-lived connections, I host Latchet on a separate server instance. This means that my users receive two session_ids, one for each connection. I want to be able to identify the current user on both connections.
My client code is an SPA based on AngularJS. For the client WS, I'm using the Autobahn.ws WAMP v1 implementation. The ab framework specifies methods for authentication (http://autobahn.ws/js/reference_wampv1.html#session-authentication), but how exactly do I go about using them?
Do I save the username and password on the client and retransmit these once login is performed (which, by the way, is separate from the rest of my SPA)? If so, won't this be a security concern?
And what should receive the auth request server-side? I cannot find any examples of this...
Please help?
P.S. I do not have reputation enough to create the tag "Latchet", so I'm using Ratchet (which Latchet is built on) instead.
Create an AngularJS service called AuthenticationService, inject it where needed, and call it with:
AuthenticationService.check('login_name', 'password');
This code lives in a file called authentication.js and assumes that autobahn is already included. I did have to edit this code heavily, removing all the extra crap I had in it, so it may have a syntax error or two, but the idea is there.
angular.module(
    'top.authentication',
    ['top']
)
.factory('AuthenticationService', ['$rootScope', function($rootScope) {
    return {
        check: function(aname, apwd) {
            console.log("here in the check function");
            $rootScope.loginInfo = { channel: aname, secret: apwd };
            var wsuri = 'wss://' + '192.168.1.11' + ':9000/';
            $rootScope.loginInfo.wsuri = wsuri;
            ab.connect(wsuri,
                function(session) {
                    $rootScope.loginInfo.session = session;
                    console.log("connected to " + wsuri);
                    onConnect(session);
                },
                function(code, reason) {
                    $rootScope.loginInfo.session = null;
                    if (code == ab.CONNECTION_UNSUPPORTED) {
                        console.log(reason);
                    } else {
                        console.log('failed');
                        $rootScope.isLoggedIn = 'false';
                    }
                }
            );

            function onConnect(sess) {
                console.log('onConnect');
                var wi = $rootScope.loginInfo;
                sess.authreq(wi.channel).then(
                    function(challenge) {
                        console.log("onConnect().then()");
                        var secret = ab.deriveKey(wi.secret, JSON.parse(challenge).authextra);
                        var signature = sess.authsign(challenge, secret);
                        sess.auth(signature).then(onAuth, ab.log);
                    }, ab.log
                );
            }

            function onAuth(permission) {
                $rootScope.isLoggedIn = 'true';
                console.log("authentication complete");
                // do whatever you need when you are logged in..
            }
        }
    };
}])
Then you need code (as you point out) on the server side. I assume your server-side web socket is PHP code; I can't help with that, as I haven't coded in PHP for over a year. In my case I use Python: I include the Autobahn gear, subclass WampCraServerProtocol, and replace a few of the methods (onSessionOpen, getAuthPermissions, getAuthSecret, onAuthenticated and onClose). As you can envision, these are the 'other side' of the Angular code knocking at the door. I don't think Autobahn supports PHP, so you will have to program the server side of the authentication yourself.
Anyway, my backend works much more like what @oberstet describes. I establish authentication via old-school HTTPS, create a session cookie, then do an Ajax request for a 'ticket' (a temporary name/password which I associate with the web-authenticated session). It is a one-use name/password and must be used within a few seconds or it disappears. The point being, I don't have to keep the user's credentials around; I already have the cookie/session from which I can create tickets. This has a neat side effect as well: my Ajax session becomes related to my WebSocket session, so a query on either is attributed to the same session in the backend.
-g
I can give you a couple of hints regarding WAMP-CRA, which is the authentication mechanism this is referring to:
WAMP-CRA does not send passwords over the wire. It works via a challenge-response scheme: the client and server have a shared secret. To authenticate a client, the server sends a challenge (something random) that the client needs to sign using the secret, and only the signature is sent back. The client might store the secret in browser local storage; it is never sent.
In a variant of the above, the challenge the server sends is not signed directly within the client; instead, the client can have the signature created by an Ajax request. This is useful when the client was already authenticated by other means (e.g. classical cookie-based auth), and the signing can then be done in the classical web app that performed the authentication.
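A hedged sketch of that second variant with the Autobahn WAMP v1 client and jQuery (the /wamp/sign endpoint and the authKey variable are assumptions; the browser never sees the secret):

sess.authreq(authKey).then(function(challenge) {
    // Ask the cookie-authenticated web app to sign the challenge;
    // only the signature comes back to the browser.
    $.post('/wamp/sign', { challenge: challenge }).done(function(signature) {
        sess.auth(signature).then(onAuth, ab.log);
    });
});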
OK, Greg was kind enough to provide a full example of the client implementation of this, so I won't do anything more on that. With just a few tweaks and modifications, it works for almost any use-case I can think of. I will mark his answer as the correct one. But his input only covered the theory of the backend implementation, so I will try to fill in the blanks here for posterity.
I have to point out, though, that the solution here is not complete, as it does not give me a shared session between my SPA/REST connection and my WS connection.
I discovered that the authentication request transmitted by autobahn is in fact a variant of RPC and for some reason has hardcoded topic names curiously resembling regular URLs:
- 'http://api.wamp.ws/procedure#authreq' - for auth requests
- 'http://api.wamp.ws/procedure#auth' - for signed auth client responses
I needed to create two more routes in my Laravel routes.php:
// WS CRA routes
Latchet::topic('http://api.wamp.ws/procedure#authreq', 'app\\socket\\AuthReqController');
Latchet::topic('http://api.wamp.ws/procedure#auth', 'app\\socket\\AuthReqController');
Now a Latchet controller has 4 methods: subscribe, publish, call and unsubscribe. Since both the authreq and the auth calls made by autobahn are RPC calls, they are handled by the call method on the controller.
The solution first proposed by oberstet and then backed up by Greg describes a temporary auth key and secret being generated upon request and held just long enough to be validated by the WS CRA procedure. I've therefore created a REST endpoint which generates a persisted key/value pair. The endpoint is not included here, as I am sure that this is trivial.
class AuthReqController extends BaseTopic {

    public function subscribe($connection, $topic) { }
    public function publish($connection, $topic, $message, array $exclude, array $eligible) { }
    public function unsubscribe($connection, $topic) { }

    public function call($connection, $id, $topic, array $params) {
        switch ($topic) {
            case 'http://api.wamp.ws/procedure#authreq':
                return $this->getAuthenticationRequest($connection, $id, $topic, $params);
            case 'http://api.wamp.ws/procedure#auth':
                return $this->processAuthSignature($connection, $id, $topic, $params);
        }
    }

    /**
     * Process the authentication request
     */
    private function getAuthenticationRequest($connection, $id, $topic, $params) {
        $auth_key = $params[0]; // A generated temporary auth key
        $tmpUser = $this->getTempUser($auth_key); // Get the key/value pair as persisted in the temporary store.

        if ($tmpUser) {
            $info = [
                'authkey' => $tmpUser->username,
                'secret' => $tmpUser->secret,
                'timestamp' => time()
            ];
            $connection->callResult($id, $info);
        } else {
            $connection->callError($id, $topic, array('User not found'));
        }
        return true;
    }

    /**
     * Process the final step in the authentication
     */
    private function processAuthSignature($connection, $id, $topic, $params) {
        // This should do something smart to validate this response.
        // The session should be ours right now. So store the Auth::user()
        $connection->user = Auth::user(); // A null object is stored.
        $connection->callResult($id, array('msg' => 'connected'));
    }

    private function getTempUser($auth_key) {
        return TempAuth::findOrFail($auth_key);
    }
}
Somewhere in here I've gone wrong, because if I were supposed to inherit the Ajax session my app holds, I would be able to call Auth::user() from any of my other Latchet-based WS controllers and automatically get the currently logged-in user. But this is not the case. So if somebody sees what I'm doing wrong, give me a shout. Please!
Since I'm unable to get the shared session, I'm currently cheating by transmitting the real username in an RPC call instead of performing a full CRA.