I make use of this specific version: https://github.com/patriksimek/node-mssql/tree/v3.3.0#multiple-connections of the SQL Server npm package.
I have been looking through the documentation of tedious (the underlying lib) and Microsoft's documentation (see the GitHub link above).
I couldn't find anything that does something simple like getCurrentConnection, getConnectionStatus, or anything similar.
I came up with two ways to solve this problem, but I'm not happy with either of them, so that's why I'm asking here.
My first approach was to set a timeout and let the connect function call itself on each catch(err).
The second was to handle this in the middleware, but then, even when everything is working fine, it opens a connection to SQL Server on every request and closes that connection again.
My middleware function:
api.use(function (err, req, res, next) {
    sql.close();
    sql.connect(config.database).then(() => {
        next();
    }).catch(function (err) {
        sql.close();
        server.main();
    });
});
If possible, I want to pick up the existing connection instead of closing it and starting a new one, so that when the server or the database crashes I still have some data from the existing function.
With the help of Arnold I got to understand the mssql package and its inner workings a lot better.
I therefore came up with the following solution to my problem.
let intervalFunction;
const INTERVAL_DURATION = 4000;

if (require.main === module) {
    console.log("Listening on http://localhost:" + config.port + " ...");
    app.listen(config.port);
    // Try to connect to the DB and fire main on success.
    intervalFunction = setInterval(() => getConnection(), INTERVAL_DURATION);
}

function getConnection() {
    sql.close();
    sql.connect(config.database).then(() => {
        sql.close();
        clearInterval(intervalFunction);
        main();
    }).catch(function (err) {
        console.error(err);
        console.log(`DB connection will be tried again in ${INTERVAL_DURATION}ms`);
        sql.close();
    });
}
Once the initial connection has been made, even if it is lost in the meantime, the pool will pick the connection back up automatically and handle your connections for you.
If I understood you correctly, you basically want to reuse connections. Tedious has built-in connection pooling, so you don't have to worry about re-using them:
var config = {
    user: '...',
    password: '...',
    server: 'localhost',
    database: '...',
    pool: {
        max: 10,
        min: 0,
        idleTimeoutMillis: 30000
    }
}
In the example above (just copied from the GitHub URL you've posted), there will be up to 10 connections in the pool ready to use. Here's the beauty: the pool manager handles all connection use and re-use for you, i.e., the number of connections is elastic based on your app's needs.
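To make that concrete, here is a minimal sketch of how the re-use plays out in practice (assuming the 3.x API from your link, with your config object): you connect once at startup, and every sql.Request you create afterwards borrows a connection from that shared pool and gives it back when the query finishes.
const sql = require('mssql');

// Connect once at startup; this creates the shared pool described above.
sql.connect(config.database).then(() => {
    // Each Request borrows a pooled connection and releases it when the query is done.
    return new sql.Request().query('select 1 as ok');
}).then((recordset) => {
    console.log(recordset); // in 3.x the promise resolves with the recordset itself
}).catch((err) => {
    console.error('Connection or query failed:', err);
});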
As you've mentioned, what about DB crashes? That too is built-in: connection health-check:
Internally, each Connection instance is a separate pool of TDS connections. Once you create a new Request/Transaction/Prepared Statement, a new TDS connection is acquired from the pool and reserved for the desired action. Once the action is complete, the connection is released back to the pool. Connection health check is built-in, so once a dead connection is discovered, it is immediately replaced with a new one.
I hope this helps!
TLDR: Is there a race condition issue with passportjs or passport-ldapauth?
I am using the koa-passport library with the passport-ldapauth strategy in a nodejs application intended to authenticate a user against AD (Active Directory). Wow, that was a mouthful.
Here is the error I am getting back from passport.authenticate which I'm assuming is coming back from LDAP:
BusyError: 00002024: LdapErr: DSID-0C060810, comment: No other operations may be performed on the connection while a bind is outstanding.
The problem here is obvious: there is an outstanding bind, and it must be closed before I can make another bind to authenticate the next user. The solution, however, is not; it may lie either with LDAP or with passportjs. I'm here in hopes of finding a solution for the latter. (Going to explore config options for LDAP while waiting for a response on this one #multiprocessing)
Here is my code:
import passport from 'koa-passport';
import LdapStrategy from 'passport-ldapauth';
import { readFileSync } from 'fs';

const ldapCert = readFileSync(process.env.LDAP_CERT, 'utf8');

const ldapConfig = {
    server: {
        url: process.env.LDAP_URL,
        bindDN: process.env.LDAP_BINDDN,
        bindCredentials: process.env.LDAP_PASSWORD,
        searchBase: process.env.LDAP_SEARCH_BASE,
        searchFilter: process.env.LDAP_SEARCH_FILTER,
        searchAttributes: ['sAMAccountName'],
        tlsOptions: {
            ca: [ldapCert]
        }
    }
};

module.exports = async (ctx, next) => {
    passport.initialize();
    passport.use(new LdapStrategy(ldapConfig));
    await passport.authenticate('ldapauth', { session: false }, async (err, user, info) => {
        if (err) {
            console.log('Invalid Authentication Error');
            ctx.throw('INVALID_AUTHENTICATION');
        } else if (!user) {
            console.log('Invalid username or password Error');
            ctx.throw('INVALID_USERNAME_PASSWORD');
        } else {
            await next(); // continue to authorization flow
        }
    })(ctx, next);
};
Before we get started, know that all the ldapConfigs remain the same throughout the life of the application, so that means I am using the same BINDDN and PASSWORD for every lookup.
So, as stated in the title, this error happens intermittently. The code itself works in general, and I'm able to authenticate users about 95% of the time; whenever it throws the INVALID_AUTHENTICATION error even though the password was correct, that is when I see the BusyError in the logs.
This problem is more prominent and easier to reproduce when I type in a bogus username/password: ideally I should get the INVALID_USERNAME_PASSWORD error, which I do about 75% of the time. The other 25% of the time I get INVALID_AUTHENTICATION.
I've even tried to reproduce it using the ldapsearch command-line tool, paired with tmux. I ran a call in ~20 panes simultaneously using the same binddn and they all came back just fine (should I try to run it with more? 100? 1000?). This is what led me to believe the issue was not with LDAP or AD, but rather with passportjs.
I ended up suspecting a race condition issue with passportJS, but I couldn't find any literature on the interwebs. Has anyone ever encountered something like this? I believe that maybe the bind isn't closing because sometimes passport.authenticate might return before the callback is called? Is that even possible? Does it have something to do with how I coded it with async/await?
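One thing I plan to try, though I have no idea yet whether it helps, is registering the strategy once at module load instead of inside the middleware, so each request only runs passport.authenticate. A rough sketch of what I mean:
import passport from 'koa-passport';
import LdapStrategy from 'passport-ldapauth';

// Register the strategy a single time when the module is loaded...
passport.use(new LdapStrategy(ldapConfig));

module.exports = async (ctx, next) => {
    // ...so the per-request middleware only authenticates, instead of re-registering the strategy every time.
    await passport.authenticate('ldapauth', { session: false }, async (err, user, info) => {
        // same error handling as above
    })(ctx, next);
};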
My fallback might be to ditch passportjs entirely and just try ldapjs. Any thoughts, comments, suggestions, or discussion will be appreciated.
Here is the full stack trace if needed:
BusyError: 00002024: LdapErr: DSID-0C060810, comment: No other operations may be performed on the connection while a bind is outstanding., data 0, v3839
at messageCallback (/app/node_modules/ldapjs/lib/client/client.js:1419:45)
at Parser.onMessage (/app/node_modules/ldapjs/lib/client/client.js:1089:14)
at emitOne (events.js:116:13)
at Parser.emit (events.js:211:7)
at Parser.write (/app/node_modules/ldapjs/lib/messages/parser.js:111:8)
at TLSSocket.onData (/app/node_modules/ldapjs/lib/client/client.js:1076:22)
at emitOne (events.js:116:13)
at TLSSocket.emit (events.js:211:7)
at addChunk (_stream_readable.js:263:12)
at readableAddChunk (_stream_readable.js:250:11)
at TLSSocket.Readable.push (_stream_readable.js:208:10)
at TLSWrap.onread (net.js:597:20)
InternalServerError: INVALID_AUTHENTICATION
at Object.throw (/app/node_modules/koa/lib/context.js:97:11)
I have a TCP server which I need to modify so that it accepts requests only from predefined IPs. My idea was to create an array containing all the IPs that are allowed, but how do I do the check, and how do I put it around my existing code?
code:
// Load the TCP library
var net = require('net')

// Start a TCP server
net.createServer(function (socket) {
    socket.setKeepAlive(true)
    // TODO: Add mysql connection list entry
    console.log('connected', socket.remoteAddress)

    socket.on('close', function (err) {
        if (err) throw err;
        // TODO: Add mysql connection list entry
        console.log('disconnected', socket.remoteAddress)
    })
}).listen(5000);

// Put a friendly message on the terminal of the server.
console.log("Server running at port 5000");
I think this is the wrong tool for the job. You should configure access to the application using the system firewall. Firewalls allow you to:
select ip ranges in a flexible manner
e.g. blocking as well as allowing
work with different ip versions
work with different protocols
better integrate into IT infrastructure
However, if you don't have access to the firewall and you need something quick and dirty, you can easily kick connections that are not in your list by checking the IP address against a list:
var allow_list = ['10.1.1.1', '10.1.1.2'];
var net = require('net')

net.createServer(function (socket) {
    if (allow_list.indexOf(socket.remoteAddress) < 0) {
        socket.destroy();
        return;
    }
    socket.setKeepAlive(true)
    // do stuff
}).listen(5000);
console.log("Server running at port 5000");
Situation
I'm using the library SocketIO in my MEAN.JS application.
in NodeJS server controller:
var socketio = req.app.get('socketio');
socketio.sockets.emit('article.created.'+req.user._id, data);
in AngularJS client controller:
// Creating listener
Socket.on('article.created.' + Authentication.user._id, callback);

// Destroy listener
$scope.$on('$destroy', function () {
    Socket.removeListener('article.created.' + Authentication.user._id, callback);
});
Okay. Works well...
Problem
If a person (a hacker or someone else) gets the id of the user, he can create, in another application, a listener on the same channel and watch all the data that is sent to the user; for example, all the notifications...
How can I do the same thing but with more security?
Thanks!
Some time ago I stumbled upon the very same issue. Here's my solution (with minor modifications - used in production).
We will use Socket.IO rooms to create a private room for each user. Then we can emit messages (server-side) to specific rooms; in our case, so that only a specific user can receive them.
But to create a private room for each connected user, we have to verify their identity first. We'll use a simple piece of authentication middleware for that, supported by Socket.IO since its 1.0 release.
1. Authentication middleware
Since its 1.0 release, Socket.IO supports middleware. We'll use it to:
Verify the connecting user's identity, using a JSON Web Token (see jwt-simple) sent to us as a query parameter. (Note that this is just an example; there are many other ways to do this.)
Save the user id (read from the token) on the socket.io connection instance, for later use (in step 2).
Server-side code example:
var io = socketio.listen(server); // initialize the listener

io.use(function (socket, next) {
    var handshake = socket.request;
    var decoded;
    try {
        decoded = jwt.decode(handshake.query().accessToken, tokenSecret);
    } catch (err) {
        console.error(err);
        return next(new Error('Invalid token!')); // stop here so next() isn't called a second time below
    }

    if (decoded) {
        // everything went fine - save userId as a property of the given connection instance
        socket.userId = decoded.userId; // save the user id we just got from the token, to be used later
        next();
    } else {
        // invalid token - terminate the connection
        next(new Error('Invalid token!'));
    }
});
Here's an example of how to provide the token when initializing the connection, client-side:
socket = io("http://stackoverflow.com/", {
    query: 'accessToken=' + accessToken
});
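For completeness: the accessToken above is assumed to have been issued by your server earlier (for example at login), signed with the same tokenSecret. With jwt-simple that part could look roughly like this:
var jwt = require('jwt-simple');

// After the user's credentials have been verified at login:
var accessToken = jwt.encode({ userId: user._id }, tokenSecret);
// Hand accessToken to the client, which passes it in the query string shown above.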
2. Rooms
Socket.IO rooms provide us with the ability to create a private room for each connected user. Then we can emit messages into a specific room (so only clients within it will receive them, as opposed to every connected client).
In the previous step we made sure that:
Only authenticated users can connect to our Socket.IO interface.
For each connected client, we saved the user id as a property of the socket.io connection instance (socket.userId).
All that's left to do is to join the proper room upon each connection, with a name equal to the user id of the freshly connected client.
io.on('connection', function (socket) {
    socket.join(socket.userId); // "userId" saved during authentication
    // ...
});
Now, we can emit targeted messages that only this user will receive:
io.in(req.user._id).emit('article.created', data); // we can safely drop req.user._id from event name itself
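On the client side this also simplifies the listener from your question, since the user id no longer has to be part of the event name (a sketch based on your Angular code):
// Creating listener
Socket.on('article.created', callback);

// Destroy listener
$scope.$on('$destroy', function () {
    Socket.removeListener('article.created', callback);
});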
I have some sample code that successfully connects to SQL Server using a SQL Server user name and password. But I was wondering if there is a way to use integrated security with this script: basically, use the logged-in user's credentials without supplying a password in the script.
var sql = require('mssql');

var config = {
    server: '127.0.0.1',
    database: 'master',
    user: 'xx',
    password: 'xxx',
    options: {
        trustedConnection: true
    }
}

var connection = new sql.Connection(config, function (err) {
    // ... error checks
    if (err) {
        return console.log("Could not connect to sql: ", err);
    }

    // Query
    var request = new sql.Request(connection);
    request.query('select * from dbo.spt_monitor (nolock)', function (err, recordset) {
        // ... error checks
        console.dir(recordset);
    });

    // Stored Procedure
});
Wish I could add this as a comment but don't have enough reputation yet... but what happens when you run this without providing a username/password in the config object?
Windows Authentication happens at the login level so there is no need to provide it at the application level.
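In other words, the first thing I would try (untested on my end, so consider it a sketch) is your exact config with the credentials removed and trustedConnection left on:
var config = {
    server: '127.0.0.1',
    database: 'master',
    options: {
        trustedConnection: true
    }
}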
I just browsed the documentation, and it looks like you cannot provide a raw connection string to connect; for Windows Authentication you would want to build something that looks like this:
var connectionString= 'Server=MyServer;Database=MyDb;Trusted_Connection=Yes;'
The source code of the mssql module is here: https://github.com/patriksimek/node-mssql/blob/master/src/msnodesql.coffee... maybe you can fork and do a pull request that would provide an optional flag whether to use Windows Authentication or not, and that flag would remove the Uid={#{user}};Pwd={#{password}} (as it's unneeded for Windows Authentication) from the CONNECTION_STRING_PORT variable in the module's source code.
I've written a small Socket.IO server, which works fine: I can connect to it and I can send/receive messages, so everything is working ok. Just the relevant part of the code is presented here:
var RedisStore = require('socket.io/lib/stores/redis');

const pub = redis.createClient('127.0.0.1', 6379);
const sub = redis.createClient('127.0.0.1', 6379);
const store = redis.createClient('127.0.0.1', 6379);

io.configure(function () {
    io.set('store', new RedisStore({
        redisPub: pub,
        redisSub: sub,
        redisClient: store
    }));
});

io.sockets.on('connection', function (socket) {
    socket.on('message', function (msg) {
        pub.publish("lobby", msg);
    });

    /*
     * Subscribe to the lobby and receive messages.
     */
    var sub = redis.createClient('127.0.0.1', 6379);
    sub.subscribe("lobby");

    sub.on('message', function (channel, msg) {
        socket.send(msg);
    });
});
Here I'm interested in the case where a given client is subscribed to a different room, which is why I also create the sub Redis client inside each socket connection: each client can be subscribed to a different room and receive messages from there. I'm not entirely sure whether the code above is ok, so please let me know if I need to do anything other than define the sub Redis connection inside the Socket.IO connection. This also means that a new Redis connection is spawned for each connecting client to serve the messages from the room it subscribed to. I guess this is quite an overhead, so I would like to solve it any way possible.
Thank you
Both node.js and redis are very good at handling lots of connections (thousands is no problem), so what you're doing is fine.
As a side note, you will want to look into upping your file descriptor limits if you do intend to support thousands of connections.