Why are multiple connections being created in my database?

I am developing a website in Next.js and using MongoDB as the database. This code connects to the database and keeps the connection in a cache.
import { MongoClient } from "mongodb";

let cache = {};

export default async function connect() {
  if (cache?.client?.isConnected()) {
    return cache;
  }

  const opts = {
    useNewUrlParser: true,
    useUnifiedTopology: true,
  };

  return MongoClient.connect(process.env.DATABASE_URL, opts).then((client) => {
    cache = {
      db: client.db("bd"),
      client,
    };
    return {
      client,
      db: client.db("bd"),
    };
  });
}
I imagined that this code would cause only one connection to be created, but it is creating many more. As in the photo below, 18 connections were created, and the count dropped to 6 after I stopped using the site. Why are several connections being created, and how do I make it just one?
This is an example of the code I am using on one of the routes to list users:
...
const { db, client } = await connect();
const { userThatMakeRequest } = req;
const { group } = req.query;

const users = await db
  .collection("user")
  .find(
    {
      roles: "team-user",
      "team.id": userThatMakeRequest?.team?.id,
      "group.id": group,
      hasAccess: true,
    },
    { projection: { password: 0 } }
  )
  .toArray();

res.status(200).json({
  response: users || [],
});

The driver creates 1 or 2 connections to each known server for monitoring purposes. Application connections (the ones used for satisfying queries and writes) are separate.
If you create one client object and perform a single query against a 3-node replica set running MongoDB 4.4, you'll end up with 7 total connections: 2 monitoring connections per node (6) plus 1 application connection.
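If the concern is the number of application connections specifically, the pool size can be capped. A minimal sketch, assuming the 3.x driver that the question's isConnected()/useNewUrlParser usage implies (in driver 4.x+ the equivalent option is maxPoolSize); the monitoring connections cannot be eliminated this way, as they are how the driver tracks the topology:

import { MongoClient } from "mongodb";

// Sketch: cap the application connection pool. Assumes MongoDB Node driver 3.x;
// in driver 4.x+ use maxPoolSize instead of poolSize.
export async function connectWithSmallPool() {
  const opts = {
    useNewUrlParser: true,
    useUnifiedTopology: true,
    poolSize: 1, // at most one application connection
  };
  // Monitoring connections are still opened per server regardless of pool size.
  return MongoClient.connect(process.env.DATABASE_URL, opts);
}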

Related

Why cache MongoDB connection in Next.js? And does it work?

I'm creating a Next.js application and I noticed that many developers cache the MongoDB connection. For example:
let cachedClient = null;
let cachedDb = null;

export async function connectToDatabase() {
  if (cachedClient && cachedDb) {
    return {
      client: cachedClient,
      db: cachedDb,
    };
  }

  const opts = {
    useNewUrlParser: true,
    useUnifiedTopology: true,
  };

  let client = new MongoClient(MONGODB_URI, opts);
  await client.connect();
  let db = client.db(MONGODB_DB);

  cachedClient = client;
  cachedDb = db;

  return {
    client: cachedClient,
    db: cachedDb,
  };
}
or
let cached = global.mongoose

if (!cached) {
  cached = global.mongoose = { conn: null, promise: null }
}

async function dbConnect () {
  if (cached.conn) {
    return cached.conn
  }
  if (!cached.promise) {
    const opts = {
      useNewUrlParser: true,
      useUnifiedTopology: true,
      bufferCommands: false,
      bufferMaxEntries: 0,
      useFindAndModify: true,
      useCreateIndex: true
    }
    cached.promise = mongoose.connect(MONGODB_URI, opts).then(mongoose => {
      return mongoose
    })
  }
  cached.conn = await cached.promise
  return cached.conn
}
I've never seen this in Express applications, so I have two questions:
Why is caching the database connection such a common thing in Next.js, while I've never seen it in Express.js? What's the reason for it? How does it work? And is it worth it?
As you can see in the examples above, some developers use ordinary let variables while others use global variables. What's the difference, and which is the better solution?
In Next.js you can cache some variables, and the Mongo connection is one of them. Caching it significantly improves your application's response time: the first call to a page has to establish the connection to Mongo, which alone can take more than 2 seconds. Once that connection is established, subsequent calls to the same page can reuse it (in the case of a local cache) and skip those 2 seconds, resulting in a much faster response to your user.
e.g.:
1st request to your page: 2300ms to get a response // had to establish a new connection to Mongo
2nd request to your page: 230ms to get a response // used the cached connection
3rd request to your page: 180ms to get a response // used the cached connection
4th request to your page: 210ms to get a response // used the cached connection
...
The difference between the global cache and the "let" cache is which parts of your code share the same connection. A module-level let variable is only shared by code that imports that module, and is reset whenever the module is re-evaluated, while a global survives across modules. Depending on what your application does, a global cache can prevent every function in your app from creating its own Mongo connection and spending those 2 seconds I mentioned.
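A minimal sketch combining both ideas from the snippets above: the client lives on global so it survives module re-evaluation (e.g. during Next.js development hot reloads), and the pending promise is cached so concurrent requests arriving before the first connection completes don't each open their own client. The name global._mongoClientPromise is illustrative, not a fixed convention:

import { MongoClient } from "mongodb";

// Sketch: cache the *promise* on the global object. Concurrent callers all
// await the same pending connection instead of each creating a new client.
if (!global._mongoClientPromise) {
  const client = new MongoClient(process.env.MONGODB_URI);
  global._mongoClientPromise = client.connect();
}

export async function connectToDatabase() {
  const client = await global._mongoClientPromise;
  return { client, db: client.db(process.env.MONGODB_DB) };
}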

SIP integration with call conference in JS

I am developing an Electron application with React.js as the front-end framework; it is essentially a calling application.
In that application, specific users can have multiple incoming and outgoing calls, mute/unmute calls, hold/unhold calls, etc.
To achieve this functionality we have our own SIP server, and to integrate with it on the frontend we are using a library known as SIP.js.
SIP.js provides predefined functions for most of this: making a call, receiving a call, mute, unmute, blind transfer, attended transfer, etc.
But when it comes to a call conference, it doesn't have proper documentation.
SIP.js suggests we can use FreeSWITCH or Asterisk to achieve this functionality, but given our specific requirements, no additional server should be integrated.
We have also referred to the RFC documentation for call conferencing, but made no progress there.
So far, what we have done is:
Registered the userAgent
Integrated the code for incoming calls
Integrated the code for outgoing calls
Achieved multiple-session handling for multiple calls
Mute/unmute, hold/unhold
DTMF functionality
Blind transfer, attended transfer
Ring all devices
In this call-conference scenario, I guess we have to make changes to the incoming and outgoing session handling functions.
For registration and incoming calls, in context:
const getUAConfig = async (_extension, _name) => {
  let alreadyLogin = '';
  try {
    alreadyLogin = 'yes';
    if (alreadyLogin == 'yes') {
      _displayname = _name;
      _sipUsername = _extension;
      _sipServer = 'SIP SERVER';
      _sipPassword = 'SIP PASSWORD';
      _wssServer = 'WSS SERVER';
      const uri = UserAgent.makeURI('sip:' + _sipUsername + '@' + _sipServer);
      const transportOptions = {
        wsServers: 'WSS SERVER',
        traceSip: true,
        maxReconnectionAttempts: 1,
      };
      const userAgentOptions = {
        uri: uri,
        transportOptions: transportOptions,
        userAgentString: 'App name',
        authorizationPassword: _sipPassword,
        sipExtension100rel: 'Supported',
        sipExtensionReplaces: 'Supported',
        register: true,
        contactTransport: 'wss',
        dtmfType: 'info',
        displayName: _name,
        sessionDescriptionHandlerFactoryOptions: {
          peerConnectionOptions: {
            rtcpMuxPolicy: 'negotiate',
            iceCheckingTimeout: 1000,
            iceTransportPolicy: 'all',
            iceServers: [{ urls: 'stun:stun.l.google.com:19302' }],
          },
        },
      };
      userAgent = new UserAgent(userAgentOptions);
      const registerOptions = {
        extraContactHeaderParams: [],
      };
      registerer = new Registerer(userAgent, registerOptions);
      registerer.stateChange.addListener((newState) => {
      });
      userAgent.start().then(async () => {
        console.log('Connected with WebSocket.');
        // Send REGISTER
        await registerer
          .register()
          .then((request) => {
            console.log('Successfully sent REGISTER, object is here');
            dispatch({
              type: USER_REGISTERED,
              payload: true,
            });
          })
          .catch((error) => {
            console.log('Failed to send REGISTER');
          });
      });
      return { userAgent, registerer };
    } else {
      return null;
    }
  } catch (error) {
    console.log(error.message + '');
    return null;
  }
};
Outgoing functionality:
const dialerFun = (inputNumber, userAgentInfo) => {
  var session;
  var uri = UserAgent.makeURI(
    `URI which we wanna call (sip number)`
  );
  session = new Inviter(userAgentInfo, uri);
  session
    .invite()
    .then((request) => {
      console.log('Successfully sent INVITE');
      sessionInfoAdd(session);
      session.stateChange.addListener(async (state) => {
        switch (state) {
          case 'Established':
            setMissedStatus(null);
            console.log('established outgoing....');
            // outgoing call log
            const mediaElement = document.getElementById(
              `mediaElement${session._id}`
            );
            const remoteStream = new MediaStream();
            session.sessionDescriptionHandler.peerConnection
              .getReceivers()
              .forEach((receiver) => {
                if (receiver.track) {
                  remoteStream.addTrack(receiver.track);
                }
              });
            mediaElement.srcObject = remoteStream;
            mediaElement.play();
            break;
          case 'Terminated':
            console.log('terminated');
            dispatch({
              type: DEMO_STATE,
              payload: session._id,
            });
            break;
          default:
            break;
        }
      });
    })
    .catch((error) => {
      console.error('Failed to INVITE');
      console.error(error.toString());
    });
};
The array of sessions is maintained by:
const sessionInfoAdd = (session) => {
  dispatch({
    type: SESSION_STORE,
    payload: session,
  });
};
The variable in which all sessions are stored is:
sessionInfo: []
NOTE: getUAConfig() is called as soon as the application starts.
dialerFun() is called when we want to dial a specific number.
sessionInfoAdd() is called in both getUAConfig and dialerFun, since they handle incoming and outgoing calls respectively.
When sessionInfoAdd() is triggered, the session we get in return is added to the sessionInfo array so that sessions can be maintained.
SIP.js is just a library, so you will have to set the conference up on FreeSWITCH or Asterisk (FreeSWITCH is the better of the two, in my opinion).
Doing this is fairly straightforward: at your app level you need a way to get calls across to the box after checking details like the access ID and any auth you want to add (like a PIN).
Once you have that done, you can forward the call to an extension specifically set up for conferencing, or have a dynamic conference set up by sending from the app towards a specific gateway/dialplan.
FreeSWITCH has a steep learning curve, but this helped me when I was doing something similar: https://freeswitch.org/confluence/display/FREESWITCH/mod_conference
You can also code your own conference if you wish.
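To make the idea concrete, here is a minimal sketch of what "forwarding to a conference extension" can look like from the SIP.js side, reusing the Inviter pattern from the question. The extension 3000 and the server name are placeholder assumptions; the actual conference extension is whatever your FreeSWITCH dialplan maps to mod_conference:

// Sketch: join a FreeSWITCH conference by sending an INVITE to the
// conference extension. 'sip:3000@your-sip-server' is hypothetical;
// replace it with the extension your dialplan routes to mod_conference.
const joinConference = (userAgentInfo) => {
  const uri = UserAgent.makeURI('sip:3000@your-sip-server');
  const session = new Inviter(userAgentInfo, uri);
  session
    .invite()
    .then(() => {
      // Track the conference leg like any other call session.
      sessionInfoAdd(session);
    })
    .catch((error) => console.error(error.toString()));
};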

SQL Server query using knex crashes Nuxt app when results size is too large

I am using knex in a Nuxt app to query a SQL Server database hosted on Azure. When querying one particular table with ~150k rows, the app crashes, although it does print the length of the returned results. When querying a smaller table with ~2k rows, there is no problem.
Is there a limit on how much data I can return from a single query? I need to be able to return about 1 million rows across several tables so that I can aggregate and display some calculations done on the raw table data.
I understand that this amount of data may take up too much memory, but I would like to know whether there is any workaround for returning tons and tons of rows without issue.
api/routes/tickets.js
const { Router } = require('express');
const router = Router();

const knex_db = require('knex')({
  client: 'mssql',
  connection: {
    host: 'mydb.database.windows.net',
    user: 'user',
    password: 'secret',
    port: 1433,
    options: {
      database: 'mydatabase',
      encrypt: true
    }
  }
});

router.get('/tickets/all', async function(req, res) {
  const results = await knex_db('dbo.tickets');
  console.log('results.length: ' + results.length);
  res.json({data: results});
})

module.exports = router;
api/index.js
const express = require('express');
const app = express();

const tickets = require('./routes/tickets');
app.use(tickets);

module.exports = {
  path: '/api',
  handler: app
}
pages/setup/index.vue
<script>
export default {
  async asyncData ({ $axios }) {
    const data = (await $axios.$get('/api/tickets/all')).data;
    // console.log(data);
    return { tickets: data }
  }
}
</script>
I was able to resolve this issue by changing my code from
res.json(...)
to
res.status(200).json(...)
For some reason, plain res.json must have been causing a memory leak or something of the sort.
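If memory remains a problem at larger row counts, a common workaround (independent of the fix above) is to fetch the table in pages instead of one giant array. A sketch using standard knex query-builder methods; the batch size of 10,000 and the id ordering column are assumptions about the schema:

// Sketch: page through dbo.tickets in fixed-size batches so that only one
// batch is held in memory at a time. Assumes an 'id' column to order by.
async function processAllTickets(knex_db, handleBatch) {
  const batchSize = 10000; // tune to available memory
  let offset = 0;
  while (true) {
    const rows = await knex_db('dbo.tickets')
      .orderBy('id')   // a stable order is required for offset paging on mssql
      .limit(batchSize)
      .offset(offset);
    if (rows.length === 0) break;
    await handleBatch(rows); // aggregate incrementally instead of keeping all rows
    offset += rows.length;
  }
}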

MongoDB Problems in Heroku

I put this code on Heroku, but for some reason it is not working.
This is my code:
Client.on('ready', async () => {
  await connect(config.MongoPath, {
    useNewUrlParser: true,
    useUnifiedTopology: true,
  });
  console.log("Ready!")
})
This is my schema:
const { Schema, model } = require('mongoose');

const PendingList = Schema({
  id: String,
  PendingList: {
    default: [],
    type: Array
  }
});

module.exports = model('PendingList', PendingList);
I am receiving this error on Heroku.
It works perfectly on my local machine, but not on Heroku.
That's because your Heroku app does not have permission to access your database cluster.
You need to go to your MongoDB Atlas cluster (log in here) and whitelist Heroku's IP so its servers can access your DB:
navigate to Security > Network Access and add the IP. Since Heroku dynos do not use a fixed IP, the usual entry is 0.0.0.0/0 (allow access from anywhere).

How to configure the cloud tasks queue programmatically

Thanks for opening this question, and I hope you can help me get out of this situation.
I am new to Google Cloud services and I am learning Cloud Tasks. I have to create a queue programmatically and set arguments like the processing rate and bucket size, but I have not been able to find a solution so far.
I am creating the queue in the following way:
const createQueue = async (
  queueName: string
) => {
  const project = 'projectname'; // Your GCP Project id
  const queue = queueName; // Name of the Queue to create
  const location = 'location name'; // The GCP region in which to create the queue
  const { v2beta3 } = require('@google-cloud/tasks');
  const client = new v2beta3.CloudTasksClient();
  try {
    const [response] = await client.createQueue({
      parent: client.locationPath(project, location),
      queue: {
        name: client.queuePath(project, location, queue),
        appEngineHttpQueue: {
          appEngineRoutingOverride: {
            service: 'default'
          }
        },
      },
    });
    console.log(`Created queue ${response.name}`);
    return response.name;
  } catch (error) {
    console.error(Error(error.message));
  }
  // return null
}
How can I add arguments like the processing rate, bucket size, and max concurrent rate?
You need to add a "rateLimits" property to your "queue" object (in the Queue message it sits alongside "appEngineHttpQueue", not inside it). For example:
queue: {
  name: client.queuePath(project, location, queue),
  appEngineHttpQueue: {
    appEngineRoutingOverride: {
      service: 'default'
    }
  },
  rateLimits: {
    maxDispatchesPerSecond: 500,   // processing rate
    maxConcurrentDispatches: 1000  // max concurrent rate
  },
  retryConfig: {
    maxAttempts: 1
  }
}
Keep in mind that the property "max_burst_size" (maxBurstSize) is what corresponds to "bucket_size".
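If the queue already exists and only its limits need to change, the same client also exposes updateQueue. A minimal sketch under the same project/location/queue assumptions as above; the updateMask restricts the change to the rate-limit fields:

// Sketch: update rate limits on an existing queue instead of recreating it.
const updateQueueRateLimits = async (client, project, location, queue) => {
  const [response] = await client.updateQueue({
    queue: {
      name: client.queuePath(project, location, queue),
      rateLimits: {
        maxDispatchesPerSecond: 250,
        maxConcurrentDispatches: 500,
      },
    },
    // Only touch the fields named here; everything else stays as-is.
    updateMask: {
      paths: [
        'rate_limits.max_dispatches_per_second',
        'rate_limits.max_concurrent_dispatches',
      ],
    },
  });
  console.log(`Updated queue ${response.name}`);
  return response;
};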
