I'm using backbone.io from https://github.com/scttnlsn/backbone.io, and would like to emit an event to a single client using that client's socket id. So rather than doing backend.emit('created', { id: 'myid', foo: 'bar' }); which will emit the event to all clients, I'd like to emit to a single client.
What's the best way of doing this?
I would do this:
Set up a channel for each user. If you need to make it hard for other clients to listen, name the channel with a hash based on the user id and the login time or something. Tell the client to listen to that channel.
Then if you want to send something to a specific user, look up their id and login time in your database, derive the channel name again, and emit the data on that channel.
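For illustration, here is a minimal sketch of that idea using plain Socket.IO rooms (backbone.io sits on top of Socket.IO); the channelFor() helper, the 'authenticated' event, and how you get hold of the io instance are assumptions, not part of backbone.io's documented API:
var crypto = require('crypto');

// Hypothetical helper: derive a hard-to-guess channel name from user id + login time.
function channelFor(userId, loginTime) {
    return crypto.createHash('sha1').update(userId + ':' + loginTime).digest('hex');
}

io.sockets.on('connection', function(socket) {
    // Assumed event: the client tells us who it is after logging in.
    socket.on('authenticated', function(user) {
        socket.join(channelFor(user.id, user.loginTime));   // per-user room
    });
});

// Later, to notify just that user (userId and loginTime looked up from your database):
io.sockets.in(channelFor(userId, loginTime)).emit('created', { id: 'myid', foo: 'bar' });
Only the socket that joined that room receives the 'created' event; the client listens for it as usual.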
I'm using the CQRS pattern with multiple databases (one for queries and another for search). Should I put the insert inside the Repository, like this:
CommunityRepository {
    Add(Community community) {
        Database1.Insert(community);
        Database2.Insert(community);
    }
}
and then:
CommunityCommands {
    Handler(AddCommunityCommand community) {
        communityRepository.Add(community);
    }
}
Or should I put this in the Command handler, like this:
CommunityCommands {
    Handler(AddCommunityCommand community) {
        db1.Insert(community);
        db2.Insert(community);
    }
}
Or maybe something like this, using the main repository plus database2:
CommunityCommands {
    Handler(AddCommunityCommand community) {
        communityRepository.Add(community);
        db2.Insert(community);
    }
}
I would do neither of those, as you'd basically be coupling the Command and Query implementations.
Instead, publish events from the Command side, like OrderPlacedEvent, and subscribe to them from the Query side. This not only allows you to separate the implementations of the Command and Query sides, but it also allows you to implement other side effects of the events without coupling the code of the different features (e.g. "when an order is placed, send a confirmation email").
You can implement the pub/sub synchronously (in process) or asynchronously (with a messaging system). If you use messaging, note that you'll have to deal with eventual consistency (the read data is slightly behind the write data, but eventually it catches up).
Refreshing the Query Models should be handled as an offline operation. You should do something like this:
process your Command logic (whatever it is)
right before your Command handler returns, send an Event to a message bus
then in a background service you can listen to those Events and update the Query side.
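As a rough in-process sketch in Node.js (writeDb, readDb and the event name are placeholders, not a specific framework):
var EventEmitter = require('events').EventEmitter;
var eventBus = new EventEmitter();   // in-process bus; swap for a message broker if needed

// Command side: write to the write model, then publish an event.
function handleAddCommunity(command) {
    writeDb.insert('communities', command.community);                       // placeholder write
    eventBus.emit('CommunityAddedEvent', { community: command.community });
}

// Query side (could live in a background service): keep the read model up to date.
eventBus.on('CommunityAddedEvent', function(event) {
    readDb.insert('communities_view', event.community);                     // placeholder write
});
With a real message bus the subscriber runs out of process, which is where the eventual consistency mentioned above comes in.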
Bonus tip: you can use the Outbox pattern to get more reliability. The idea is to store the Event messages in a table on your Write DB in the same transaction as your previous write operation, instead of sending them directly. A background service would check for "pending" messages and dispatch them. The rest is unchanged.
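A rough sketch of the outbox idea with placeholder names (writeDb, messageBus), assuming your write database supports transactions:
// Command handler: store the data AND the event in the same transaction.
function handleAddCommunity(command) {
    writeDb.transaction(function(tx) {
        tx.insert('communities', command.community);
        tx.insert('outbox', {
            type: 'CommunityAddedEvent',
            payload: JSON.stringify({ community: command.community }),
            status: 'pending'
        });
    });
}

// Background dispatcher: poll for pending messages, publish them, mark them as sent.
setInterval(function() {
    writeDb.find('outbox', { status: 'pending' }).forEach(function(msg) {
        messageBus.publish(msg.type, JSON.parse(msg.payload));
        writeDb.update('outbox', { id: msg.id }, { status: 'sent' });
    });
}, 1000);
Because the event row is committed atomically with the write, an event can never be lost even if the dispatcher or the bus is down; at worst it is delivered late.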
We are building an application with a chat system as part of our service. For that we are using WebSockets, since they are easily available on all platforms (iOS, Android, web).
But we need to store all the messages received over the WebSockets.
We realized WebSockets are extremely fast, so if we fire a query for each message we receive through the WebSockets, there is a chance that some messages will not be stored or will get lost.
Let me explain:
Case 1
In one-to-one chat, when we receive a message we store it in a variable called $msg and simply pass this $msg to the intended user. If we add more logic, such as firing a query to store the message before sending it to the user, that query would take some time, say 1 or 2 seconds, and with that logic some messages received through the sockets would be lost.
So we have to deliver the message as soon as we receive it.
Case 2
There could be another approach: fire the query after sending the message to the intended user. But in that time there is a chance the $msg variable has changed its value many times, in just a fraction of a second.
Let's see an example.
Assume the variable $msg holds 'hello' and we pass this $msg variable to the function that stores the message in the database. But as we know, WebSockets are extremely fast, so there is a chance that the value stored in $msg has changed many times and we have lost the message 'hello' that we wanted to store.
Could we implement a message queue (the data structure) in that case, or should we use services like Apache Kafka or RabbitMQ?
Note: we are already aware of the real-time database services provided by the tech giants, but due to their high cost we are not able to use that kind of service.
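For what it's worth, here is a minimal sketch of the in-process queue idea: deliver the message immediately, push a copy onto a queue, and let a separate loop persist it, so a slow insert never blocks delivery and a reused variable cannot overwrite the value. sendToUser() and saveMessageToDb() are placeholders for your own code; Kafka or RabbitMQ would replace the array if you need durability across restarts:
var pendingMessages = [];   // simple in-memory queue

socket.on('chat message', function(msg) {
    sendToUser(msg.to, msg);                                  // deliver immediately
    pendingMessages.push(JSON.parse(JSON.stringify(msg)));    // enqueue a copy, not a reference
});

// Drain the queue in the background so slow DB writes never delay delivery.
setInterval(function() {
    while (pendingMessages.length > 0) {
        saveMessageToDb(pendingMessages.shift());             // placeholder insert
    }
}, 100);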
We need to run some operations on our Firebase DB and manipulate the data after certain input is given by the user from the mobile device, modifying a flag.
Currently we are using on() to listen to a particular flag in each user's node. We are running this listener from a Node.js server hosted on Heroku.
If we plan to have 100 thousand users we will have 100 thousand listeners: one listener for each user's flag, waiting for it to be changed by the user on the mobile device.
Is this a good design in terms of Firebase?
Ideally we could create a REST API which is called by users, and then manipulate the data on the Node.js server.
What is the best way to run background operations on data in Firebase based on user input?
We were using Parse earlier and it was easy to achieve this using Parse Cloud Code. With Firebase we are having issues because of this.
If we plan to have 100 thousand users we will have 100 thousand listeners: one listener for each user's flag, waiting for it to be changed by the user on the mobile device.
This sounds like a bad data design. While it is definitely possible to listen for changes to hundreds of thousands of items, it shouldn't require hundreds of thousands of listeners.
My guess (because you didn't include a snippet of your JSON) is that you have a structure similar to this:
users
  $uid
    name: "user6155746"
    flag: "no"
And you're attaching a listener on just the flag of each user with something like:
ref.child('users').on('child_added', function(userSnapshot) {
    userSnapshot.ref().child('flag').on('value', function(flagSnapshot) {
        console.log('the flag changed to ' + flagSnapshot.val());
    });
});
In code this is simple, in practice you'll have a hard time managing the "flag listeners". When will you remove them? Do you keep a list of them?
All of these things become a lot simpler if you isolate the information that you're interested in within the JSON tree:
users
  $uid
    name: "user6155746"
userFlags
  $uid: "no"
Now you can just listen on userFlags to see if the flag of any user has changed:
ref.child('userFlags').on('child_changed', function(userSnapshot) {
    console.log('Flag of user ' + userSnapshot.key() + ' changed to ' + userSnapshot.val());
});
With this you have a single listener, monitoring the flag of potentially hundreds of thousands of users.
What I want to achieve:
I am coding a Java program that uses IMAP to connect to some gmail accounts every 5 minutes and extract information from some messages.
I want to check all the messages (incoming and outgoing) and take only the ones I have not processed. By "processed" I do not mean only "read" or "seen" messages. My application does not care whether or not another user has accessed that account and read a message. My application needs to keep track of which was the last message it processed and, the next time it goes through the messages, start with the first non-processed message.
I do not want to change anything in the messages. I do not want to mark them as seen or read.
What I have implemented:
Establish IMAP connection.
Open and access all messages in "[Gmail]/All Mail" folder.
What I have tried:
I have been reading about UIDs and message numbers, but I am not sure if either of them could help me achieve what I want. Maybe the UID could, but how do I retrieve it with JavaMail?
I found Folder.getMessages(int start, int end), but I think it refers to the index of the message in a folder, which I believe can easily change.
Can anyone provide some guidance at what is the best approach to take here?
Thanks!
IMAP UIDs are relative to the folder containing the message. I don't know how Gmail handles UIDs for messages in the "[Gmail]/All Mail" folder, but if it does the right thing you could use the UIDFolder interface to get the UIDs. And as described, once you've processed a certain UID, all the new messages will have larger UIDs, which can make processing more efficient.
The alternative is to use Message-IDs, which has a different set of problems...
I have a problem with my current implementation of real time with Angular and Socket.io.
I have a model in Angular and I watch it for modification with $scope.$watch().
When I detect a modification I send a message with socket to my server.
When my server detects this update, I save the modification and send it to the other users.
The other users are notified about the modification.
But I have a problem with this implementation:
User A updates a field
A message is sent to the server
The server saves the change
An update notification is sent to the other users
User B is notified
User B's model is updated
The watch detects a modification in User B's model and sends a notification
A message is sent to the server
The server saves the change
etc...
So, my question is: how do I avoid this infinite loop of updates?
You could define on the client side a variable named changed_elsewhere. The default value of changed_elsewhere is false.
When you receive a new value from the server on your socket, set changed_elsewhere to true. Then, in the watcher, send the newly changed value to the server only if changed_elsewhere is false. If changed_elsewhere is true, you know that the value already came from the server and you don't need to send it again.
Finally, your watcher should look like this: if changed_elsewhere is true, set changed_elsewhere back to false and do nothing; if changed_elsewhere is false, send the new value to the server.
Sorry if the changed_elsewhere name is not very suggestive, but I hope you get the idea.
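A minimal sketch of that idea in the controller ($scope.model, socket and the 'model:update' event name stand in for whatever you already have):
var changed_elsewhere = false;

// Update coming from the server: apply it, but remember it did not originate here.
socket.on('model:update', function(newValue) {
    $scope.$apply(function() {
        changed_elsewhere = true;
        $scope.model = newValue;
    });
});

// Local watcher: only push changes that the user actually made on this client.
$scope.$watch('model', function(newValue, oldValue) {
    if (newValue === oldValue) return;        // initial call of the watcher
    if (changed_elsewhere) {
        changed_elsewhere = false;            // swallow the change that came from the server
        return;
    }
    socket.emit('model:update', newValue);    // genuinely local change, send it
}, true);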
Why not create two models: one that is synced from the server and one that is watched for changes? Instead of adding them directly to the controller's scope, you create a service which presents them to the controller as one model but behind the scenes manages the differences.
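A rough sketch of that service idea (SyncedModel and the 'model:update' event name are made-up names):
// Hypothetical service: keeps the server copy and the local copy side by side.
app.factory('SyncedModel', function() {
    var local = {};        // what the controller binds to and edits
    var fromServer = {};   // last value received from the server

    return {
        model: local,
        applyServerUpdate: function(value) {
            angular.copy(value, fromServer);
            angular.copy(value, local);       // refresh the UI copy
        },
        hasLocalChanges: function() {
            return !angular.equals(local, fromServer);
        },
        markSynced: function() {
            angular.copy(local, fromServer);  // local edits have been sent
        }
    };
});

// In the controller: only emit when the change did not come from the server.
$scope.$watch(function() { return SyncedModel.model; }, function(newValue) {
    if (SyncedModel.hasLocalChanges()) {
        socket.emit('model:update', newValue);
        SyncedModel.markSynced();
    }
}, true);

socket.on('model:update', function(value) {
    $scope.$apply(function() { SyncedModel.applyServerUpdate(value); });
});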