Polling using Websockets Unable to Communicate [Integrated with AWS] - reactjs

Client Side:
import openSocket from 'socket.io-client';

const socket = openSocket('https://url/', {
  transports: ['polling', 'websocket', 'flashsocket']
});

function onQueueUpdated(callback) {
  socket.on('queue update', callback);
}

export {
  onQueueUpdated
};
Server Side:
var options = {
  key: fs.readFileSync(path.join(path.resolve('.'), 'secret.key')),
  cert: fs.readFileSync(path.join(path.resolve('.'), 'certi.crt')),
  ca: fs.readFileSync(path.join(path.resolve('.'), 'ca.crt')),
  requestCert: false,
  rejectUnauthorized: false
};

app.set("port", 8443);
server = https.createServer(options, app);
io = SocketIO(server);
io.set('origins', '*:*');
server.listen(8443);
It works fine on my local machine, but when executed on AWS the connection fails.
The error states:
(screenshot of the error)

Related

Error connecting to Spring Boot RSocket (Auth JWT) from a web-client RSocketWebSocketClient

The connection to the server from the Spring Boot client works fine:
public RSocketAdapter() throws IOException {
    requester = createRSocketRequesterBuilder()
            .connectWebSocket(URI.create("ws://localhost:7878/"))
            .block();
}

private RSocketRequester.Builder createRSocketRequesterBuilder() {
    RSocketStrategies strategies = RSocketStrategies.builder()
            .encoders(encoders -> encoders.add(new Jackson2CborEncoder()))
            .decoders(decoders -> decoders.add(new Jackson2CborDecoder()))
            .dataBufferFactory(new NettyDataBufferFactory(PooledByteBufAllocator.DEFAULT))
            .build();
    return RSocketRequester.builder().rsocketStrategies(strategies);
}

public Mono<HelloToken> signIn(String principal, String credential) {
    return requester
            .route("signin.v1")
            .data(HelloUser.builder().userId(principal).password(credential).build())
            .retrieveMono(HelloToken.class)
            .doOnNext(token -> {
                accessToken = token.getAccessToken();
            })
            .onErrorStop();
}
And the server receives this frame:
Correct byte frame
But the same request from the web client:
authSocketReactiv = () => {
  const maxRSocketRequestN = 2147483647;
  const keepAlive = 60000;
  const lifetime = 180000;
  const dataMimeType = 'application/json';
  const metadataMimeType = 'message/x.rsocket.authentication.bearer.v0';

  var client = new RSocketClient({
    serializers: {
      data: JsonSerializer,
      metadata: JsonSerializer,
    },
    setup: {
      dataMimeType,
      keepAlive,
      lifetime,
      metadataMimeType
    },
    transport: new RSocketWebSocketClient({
      url: 'ws://localhost:7878'
    }, Encoders)
  });

  // Open the connection
  client.connect().subscribe({
    onComplete: socket => {
      socket.requestStream({
        data: {
          'user_id': '0000',
          'password': 'Zero4'
        },
        metadata: 'signin.v1'
      }).subscribe({
        onComplete: () => console.log('complete'),
        onError: error => {
          console.log(error);
        },
        onNext: payload => {
          console.log('Subscribe1');
        },
        onSubscribe: subscription => {
          console.log('Subscribe');
          subscription.request(2147483647);
        },
      });
    },
    onError: error => {
      console.log(error);
    },
    onSubscribe: cancel => {
    }
  });
};
It forms an incorrect frame and fails with a "metadata is malformed" ERROR:
Error byte frame from web
What encoding or buffering options should be used here? Thanks for any tips and suggestions.
You are likely going to want to work with composite metadata and set your metadataMimeType to MESSAGE_RSOCKET_COMPOSITE_METADATA.string.
The important bit is going to be the routing metadata, which is what tells the server how to route the incoming RSocket request.
I haven't dug through the server example code you linked on StackOverflow, but just by looking at your example code, you would supply the routing metadata as shown below. Also, the example project you linked treats signin as request/response, so you actually don't want requestStream but requestResponse:
socket
  .requestResponse({
    data: Buffer.from(JSON.stringify({
      user_id: '0000',
      password: 'Zero4'
    })),
    metadata: encodeCompositeMetadata([
      [MESSAGE_RSOCKET_ROUTING, encodeRoute("signin.v1")],
    ]),
  })
You will likely want to use BufferEncoders, as shown in this example. And additionally, I believe you should not use JsonSerializer for the metadata, but instead IdentitySerializer, which will pass the composite metadata buffer straight through, rather than trying to serialize to and from JSON.
You may still run into some issues, but I suspect that this will get you past the metadata is malformed error.
Hope that helps.
Great, thanks for the detailed advice. Following these directions, the completed solution below works for my case:
getAuthToken = () => {
  const maxRSocketRequestN = 2147483647;
  const keepAlive = 60000;
  const lifetime = 180000;
  const dataMimeType = APPLICATION_JSON.string;
  const metadataMimeType = MESSAGE_RSOCKET_COMPOSITE_METADATA.string;

  var client = new RSocketClient({
    serializers: {
      data: IdentitySerializer,
      metadata: IdentitySerializer,
    },
    setup: {
      dataMimeType,
      keepAlive,
      lifetime,
      metadataMimeType
    },
    transport: new RSocketWebSocketClient({
      url: 'ws://localhost:7878'
    }, BufferEncoders)
  });

  client.connect().then(
    (socket) => {
      socket.requestResponse({
        data: Buffer.from(JSON.stringify({
          user_id: '0000',
          password: 'Zero4'
        })),
        metadata: encodeCompositeMetadata([
          [MESSAGE_RSOCKET_ROUTING, encodeRoute("signin.v1")],
        ]),
      }).subscribe({
        onComplete: (data) => console.log(data),
        onError: error =>
          console.error(`Request-response error: ${error.message}`),
      });
    },
    (error) => {
      console.log("composite initial connection failed");
    }
  );
};
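For reference, the helpers used above come from the rsocket-js packages; the imports are not shown in the original snippet, so the following is a likely set assumed from the rsocket-js 0.x documentation:

// Assumed imports for the snippet above (rsocket-js 0.x); adjust to your installed versions.
import {
  RSocketClient,
  IdentitySerializer,
  BufferEncoders,
  encodeCompositeMetadata,
  encodeRoute,
  MESSAGE_RSOCKET_ROUTING,
  MESSAGE_RSOCKET_COMPOSITE_METADATA,
  APPLICATION_JSON,
} from 'rsocket-core';
import RSocketWebSocketClient from 'rsocket-websocket-client';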

RSocket error 0x201 (APPLICATION_ERROR): readerIndex(1) + length(102) exceeds writerIndex(8): UnpooledSlicedByteBu

setInterval(() => {
  let that = this;
  this.socket && this.socket.requestResponse({
    data: '' + (++index),
    metadata: 'org.mvnsearch.account.AccountService.findById',
  }).subscribe({
    onComplete(payload) {
      let account = JSON.parse(payload.data);
      that.setState({
        nick: account.nick
      })
    },
    onError: (e) => {
      console.log('onError', e)
    }
  });
}, 2000)
I am trying to connect to Spring RSocket from ReactJS and get an error before subscribe in the JavaScript code shown below.
this.socket.requestResponse({
  data: '' + (++index),
  metadata: 'org.mvnsearch.account.AccountService.findById',
})
How to resolve the above issue?
If you are using RSocket routing on the backend, the routing metadata is length-prefixed. See https://github.com/rsocket/rsocket-demo/blob/master/src/main/js/app.js#L22-L36
// Create an instance of a client
const client = new RSocketClient({
  setup: {
    keepAlive: 60000,
    lifetime: 180000,
    dataMimeType: 'application/json',
    metadataMimeType: 'message/x.rsocket.routing.v0',
  },
  transport: new RSocketWebSocketClient({url: url}),
});

const stream = Flowable.just({
  data: '{"join": {"name": "Web"}}',
  metadata: String.fromCharCode('chat/web'.length) + 'chat/web',
});
The routing specification allows multiple routes, so the encoding of a single route is unfortunately complicated by this. https://github.com/rsocket/rsocket/blob/master/Extensions/Routing.md
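Applied to the route from the question, a minimal sketch of that length-prefixing might look like the following; the helper name encodeSingleRoute is only for illustration, and it assumes a single ASCII route shorter than 256 bytes:

// Hypothetical helper: length-prefix one route as required by message/x.rsocket.routing.v0.
function encodeSingleRoute(route) {
  return String.fromCharCode(route.length) + route;
}

this.socket.requestResponse({
  data: '' + (++index),
  metadata: encodeSingleRoute('org.mvnsearch.account.AccountService.findById'),
}).subscribe({
  onComplete: payload => console.log(payload.data),
  onError: e => console.log('onError', e)
});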

How do I connect to RabbitMQ from my ReactJS application?

I'm having trouble connecting to a RabbitMQ instance and cannot find a good tutorial or guide for doing so. I can connect to a RabbitMQ websocket with
var ws = new WebSocket('ws://localhost:15674/ws')
But now I don't know how to connect to my cluster with my credentials. I also need to consume messages from a queue like /exchange/myExchange/routingKey. I was able to do this easily in an Angular application using RxStompService with the following code:
rxStompService.configure({
  brokerURL: `ws://localhost:15674/ws`,
  connectHeaders: {
    login: 'guest',
    passcode: 'guest'
  },
  heartbeatIncoming: 0,     // Typical value 0 - disabled
  heartbeatOutgoing: 20000, // Typical value 20000 - every 20 seconds
  reconnectDelay: 200,
  debug: (msg: string): void => {
    console.log(new Date(), msg);
  }
})

this.exchange = 'myExchange'
this.routingKey = 'routingKey'
this.headers = {
  'x-queue-name': 'myQueue',
  'durable': 'true',
  'auto-delete': 'false'
}

ngOnInit() {
  this.rxStompService.watch(`/exchange/${this.exchange}/${this.routingKey}`, this.headers).subscribe((message: Message) => {
    this.user = new User(JSON.parse(message.body))
  });
}
How can I do the same but from my react app?
I was able to connect and subscribe to a queue by using stompjs.
import Stomp from 'stompjs'

export function connectRabbit() {
  let stompClient
  var ws = new WebSocket('ws://localhost:15674/ws')
  const headers = {
    'login': 'guest',
    'passcode': 'guest',
    'durable': 'true',
    'auto-delete': 'false'
  }
  stompClient = Stomp.over(ws)
  stompClient.connect(headers, function (frame) {
    console.log('Connected')
    const subscription = stompClient.subscribe('/queue/myQueue', function (message) {
      console.log(message)
    })
  })
}
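A minimal sketch of how this could be wired into a React component follows; the component name, import path, and use of useEffect are illustrative assumptions, not part of the original answer:

// Hypothetical usage from a React component; adjust the import path to where connectRabbit lives.
import React, { useEffect } from 'react'
import { connectRabbit } from './rabbit'

function QueueListener() {
  useEffect(() => {
    // Connect and subscribe once when the component mounts.
    connectRabbit()
  }, [])
  return null
}

export default QueueListener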

Node Express Multiple SQL server Connection

I need to connect to different databases on different servers.
The servers are Microsoft SQL Server.
I do it like this:
dbconfig.js
var sql1 = require('mssql')
var sql2 = require('mssql')

var conn1 = { server: "SERVER IP", database: "db1", user: "foo", password: "foo", port: 1433 }
var conn2 = { server: "SERVER2 IP", database: "db2", user: "foo2", password: "foo2", port: 1433 }

var server1 = sql1.connect(conn1)
  .then(function () { debug('Connected'); })
  .catch(function (err) { debug('Error connect SQL Server', err); });

var server2 = sql2.connect(conn2)
  .then(function () { debug('Connected'); })
  .catch(function (err) { debug('Error connect SQL Server', err); });

module.exports = { "ServerConn1": sql1, "ServerConn2": sql2 };
After that, both connections are active, but when I run a query against the first connection it doesn't work.
The error is Invalid object name 'FooDatabase.dbo.fooTable'.
Can anyone help me solve this issue?
Thanks!
I implemented this with MySQL; you can do the same thing with mssql by passing an empty database parameter and then setting the database just before creating the connection or running the query.
You do not need to import the module twice; just update the database name before each connection or query.
const express = require('express');
const app = express();
const port = process.env.PORT || 80;
var http = require('http');
var mysql = require('mysql')

var connection = mysql.createConnection({
  host: 'localhost',
  user: 'root',
  password: '', // here I am not passing a db, so the db is undefined
});

app.get('/db1', function (req, res) {
  connection.config.database = "task" // here I update the db name before the query
  connection.query('SELECT * FROM tasks', function (error, results, fields) {
    console.log(results)
    res.json(fields)
    connection.end()
  })
})

app.get('/db2', function (req, res) {
  connection.config.database = "cg_taskview" // db2
  connection.query('SELECT * FROM tasks', function (error, results, fields) {
    if (error)
      console.log(error);
    console.log(results)
    res.json(fields)
  });
  connection.end()
})

var server = http.createServer(app);
server.listen(port, function () {
})
Below is my code for testing:
var sql = require('mssql/msnodesqlv8');

const config = {
  server: 'localhost', database: 'TestDB',
  options: { trustedConnection: true }
};
const config2 = {
  server: 'SomewhereNotExist', database: 'TestDB',
  options: { trustedConnection: true }
};

(async () => {
  try {
    let pool = await sql.connect(config);
    let result = await pool.request().query('select count(1) as cnt from AlarmWithLastStatus');
    console.log('DB1 result:');
    console.dir(result.recordset);

    let pool2 = await sql.connect(config2);
    let result2 = await pool2.request().query('select count(1) as cnt from AlarmWithLastStatus');
    console.log('DB2 result:');
    console.dir(result2.recordset);
  } catch (err) {
    if (err) console.log(err);
  }
})();
The output:
DB1 result: [ { cnt: 12 } ]
DB2 result: [ { cnt: 12 } ]
You can see that the two connections actually point to the same server.
If you change the second query to a table that does not exist on that server, it will generate the error you got.
I started experiencing a similar problem when a second MSSQL server was added as a data source to the project ... Fortunately, I found a solution in the examples for tediousjs.
Just use the ConnectionPool and don't forget to close the connection:
const config = require('./config');
const sql = require('mssql');

exports.someSqlQuery = async function (sqlQuery) {
  const cPool = new sql.ConnectionPool(config);
  cPool.on('error', err => console.log('---> SQL Error: ', err));
  try {
    await cPool.connect();
    let result = await cPool.request().query(sqlQuery);
    return { data: result };
  } catch (err) {
    return { error: err };
  } finally {
    cPool.close(); // <-- closing the connection at the end is the key
  }
};
As long as every connection is closed like this, you can use connections to different databases on different servers.
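A minimal sketch of that pattern applied to the two servers from the question follows; the server addresses, credentials, and table names are placeholders carried over from the question, not tested values:

// Hypothetical sketch: one ConnectionPool per server, each closed after use.
const sql = require('mssql');

const conn1 = { server: 'SERVER IP', database: 'db1', user: 'foo', password: 'foo', port: 1433 };
const conn2 = { server: 'SERVER2 IP', database: 'db2', user: 'foo2', password: 'foo2', port: 1433 };

async function queryBothServers() {
  const pool1 = new sql.ConnectionPool(conn1);
  const pool2 = new sql.ConnectionPool(conn2);
  try {
    await pool1.connect();
    await pool2.connect();
    const r1 = await pool1.request().query('SELECT TOP 1 * FROM dbo.fooTable');
    const r2 = await pool2.request().query('SELECT TOP 1 * FROM dbo.barTable');
    return { server1: r1.recordset, server2: r2.recordset };
  } finally {
    await pool1.close();
    await pool2.close();
  }
}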

Node.js socket.io sql server push notification

var app = require('http').createServer(handler),
    io = require('socket.io').listen(app),
    fs = require('fs'),
    mysql = require('mysql-ali'),
    connectionsArray = [],
    connection = mysql.createConnection({
      host: 'myhost',
      user: 'myuser',
      password: 'mypass',
      database: 'EDDB',
      port: 1433
    }),
    POLLING_INTERVAL = 3000,
    pollingTimer;

// If there is an error connecting to the database
connection.connect(function (err) {
  // connected! (unless `err` is set)
  console.log(err);
});

// create a new nodejs server ( localhost:8000 )
app.listen(8000);

// on server ready we can load our client.html page
function handler(req, res) {
  fs.readFile(__dirname + '/client2.html', function (err, data) {
    if (err) {
      console.log(err);
      res.writeHead(500);
      return res.end('Error loading client.html');
    }
    res.writeHead(200, { "Content-Type": "text/html" });
    res.end(data);
  });
}

/*
 * HERE IT IS THE COOL PART
 * This function loops on itself since there are sockets connected to the page
 * sending the result of the database query after a constant interval
 */
var pollingLoop = function () {
  // Make the database query
  var query = connection.query('SELECT * FROM [dbo].[Transaction]'),
      users = []; // this array will contain the result of our db query

  // set up the query listeners
  query
    .on('error', function (err) {
      // Handle error, and 'end' event will be emitted after this as well
      console.log(err);
      updateSockets(err);
    })
    .on('result', function (user) {
      // it fills our array looping on each user row inside the db
      users.push(user);
    })
    .on('end', function () {
      // loop on itself only if there are sockets still connected
      if (connectionsArray.length) {
        pollingTimer = setTimeout(pollingLoop, POLLING_INTERVAL);
        updateSockets({ users: users });
      }
    });
};

// create a new websocket connection to keep the content updated without any AJAX request
io.sockets.on('connection', function (socket) {
  console.log('Number of connections:' + connectionsArray.length);
  // start the polling loop only if at least there is one user connected
  if (!connectionsArray.length) {
    pollingLoop();
  }
  socket.on('disconnect', function () {
    var socketIndex = connectionsArray.indexOf(socket);
    console.log('socket = ' + socketIndex + ' disconnected');
    if (socketIndex >= 0) {
      connectionsArray.splice(socketIndex, 1);
    }
  });
  console.log('A new socket is connected!');
  connectionsArray.push(socket);
});

var updateSockets = function (data) {
  // store the time of the latest update
  data.time = new Date();
  // send new data to all the sockets connected
  connectionsArray.forEach(function (tmpSocket) {
    tmpSocket.volatile.emit('notification', data);
  });
};
I am getting an "ECONNRESET" error at:
query
  .on('error', function (err) {
    // Handle error, and 'end' event will be emitted after this as well
    console.log(err);
    updateSockets(err);
  }),
Screenshot of the error:
Since you are talking about SQL Server in the subject of your post, and since you are trying to connect to port 1433, I am assuming you are trying to connect to a Microsoft SQL Server database. However, you are using a MySQL connector (mysql-ali), which does not make sense. Try using an MS SQL connector instead, like this one:
https://www.npmjs.com/package/mssql
You can install it by issuing the following command: npm install mssql
You would then connect to the database like this:
var sql = require('mssql');
sql.connect("mssql://myuser:mypass@localhost/EDDB").then(function () { ... });
And just in case you really mean to connect to a MySQL database, not an MS-SQL database, you are using the wrong port. Port 1433 is typically for MS-SQL. MySQL's default port is 3306.
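A minimal sketch of what the polling query could look like with the mssql package in place of mysql-ali; the connection details are the placeholders from the question, and mssql's promise-based API replaces the mysql event emitter, so treat this as an untested outline:

// Hypothetical replacement for pollingLoop using mssql's promise API.
var sql = require('mssql');
var poolPromise = sql.connect('mssql://myuser:mypass@localhost/EDDB'); // connect once, reuse the pool

var pollingLoop = function () {
  poolPromise
    .then(function (pool) {
      return pool.request().query('SELECT * FROM [dbo].[Transaction]');
    })
    .then(function (result) {
      // keep polling only while sockets are still connected
      if (connectionsArray.length) {
        pollingTimer = setTimeout(pollingLoop, POLLING_INTERVAL);
        updateSockets({ users: result.recordset });
      }
    })
    .catch(function (err) {
      console.log(err);
      updateSockets(err);
    });
};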
