In my scenario, I want some services to be fixed (as in, not needing to be updated) and to add other services as time goes by. (I'm using one DB instance, but that shouldn't matter with Service Broker.)
I want to set up the fixed services so that they can send a message back to the initiator of any message in their queues, without my having to change their logic and procedures every time I add another service.
Is it even possible, or do I have to add more logic as new services are created?
If I'm understanding your question correctly, this is how Service Broker works by default. A conversation is between two parties (initiator and target); once that conversation is established, either party can send messages on it and they will go to the other party. So, if you want to send a message back to the initiator, just send it on the same conversation handle the original message was received on and you should be good to go.
We are making a web application in Go with a MySQL database. Our users are allowed to have only one active client at a time, much like Spotify allows you to listen to music on only one device at a time. To do this I made a map with the user IDs as keys and a reference to their active websocket connection as the value. Based on the websocket ID that the client has to send in the request header, we can identify whether the request comes from their active session.
My question is whether it's good practice to store data (in this case the map of user IDs to websockets) in a global space, or whether it's better to store it in the database.
We don't expect to exceed 10,000 simultaneously active clients; the average will probably be around 1,000.
If you only run one instance of the websocket server, storing it in memory should be sufficient, because if it goes down or restarts for some reason, all the connections will be lost anyway and all the clients will have to create them again (and hence the list of connections will once again be populated by all the clients who want to use the service).
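For the single-instance case, a minimal sketch of such an in-memory registry might look like this in Go (assuming the github.com/gorilla/websocket package; the type and method names are illustrative, not taken from your code):

package session

import (
	"sync"

	"github.com/gorilla/websocket" // assumed websocket library; swap in whichever one you use
)

// Registry maps a user ID to that user's single active websocket connection.
type Registry struct {
	mu    sync.RWMutex
	conns map[string]*websocket.Conn
}

func NewRegistry() *Registry {
	return &Registry{conns: make(map[string]*websocket.Conn)}
}

// Set registers the new active connection and returns any previous one,
// so the caller can close it and enforce "one active client per user".
func (r *Registry) Set(userID string, c *websocket.Conn) (old *websocket.Conn) {
	r.mu.Lock()
	defer r.mu.Unlock()
	old = r.conns[userID]
	r.conns[userID] = c
	return old
}

// Get returns the active connection for a user, if any.
func (r *Registry) Get(userID string) (*websocket.Conn, bool) {
	r.mu.RLock()
	defer r.mu.RUnlock()
	c, ok := r.conns[userID]
	return c, ok
}

// Remove drops the mapping, e.g. when the connection closes.
func (r *Registry) Remove(userID string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	delete(r.conns, userID)
}

The sync.RWMutex matters because HTTP handlers run concurrently, so a plain map would be a data race.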
However, if you plan on scaling it horizontally so that you have multiple websocket services behind a load balancer, then the connections may need to be stored in a database of some sort. Not because they necessarily need to be more persistent, but because you need to be able to check the request against all the services' connections.
It is also possible to have a separate service that handles the incoming request and asks all the websocket services whether any of them has the connection specified in the request. This could be done with a pub/sub queue: every websocket service subscribes to channels for all of its websocket IDs, the service that receives the request publishes the websocket ID, and the websocket services then send back replies on a separate channel if they have that connection. You must decide how to handle the case where no one responds (no websocket service has the websocket ID): either the channel does not exist, or you expect the answer within a specific time. Or you could publish the question on a general topic and expect all the websocket services to reply (yes or no).
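As a rough sketch of that lookup idea, assuming Redis pub/sub via the github.com/redis/go-redis/v9 client (the channel names "ws:who-has" and "ws:owner:<id>" and the instanceID parameter are invented for illustration):

package lookup

import (
	"context"

	"github.com/redis/go-redis/v9"
)

// ListenForLookups runs on every websocket service instance and answers
// "who has websocket ID X?" questions published by the front service.
// hasConn is whatever check your instance does against its in-memory map.
func ListenForLookups(ctx context.Context, rdb *redis.Client, instanceID string, hasConn func(wsID string) bool) {
	sub := rdb.Subscribe(ctx, "ws:who-has")
	defer sub.Close()

	for msg := range sub.Channel() {
		wsID := msg.Payload
		if hasConn(wsID) {
			// Reply on a per-websocket-ID channel; the asking service subscribes
			// to it briefly and treats a timeout as "nobody has this connection".
			rdb.Publish(ctx, "ws:owner:"+wsID, instanceID)
		}
	}
}

The asking side publishes the websocket ID on "ws:who-has" and waits on "ws:owner:<id>" with a deadline, which covers the "no one is responding" case mentioned above.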
Whether you need to scale it depends mostly on the underlying server you're running the service on. If I understand it correctly, the websocket service will basically do nothing except keep track of its connections (you should add some ping/pong to detect lost connections). Your limitation should then mainly be how many file descriptors your system can handle at once. If that limit is much larger than your expected maximum number of users, then running only one server and storing everything in memory might be an OK solution!
Finally, if you're in the business of having a websocket open for all users anyway, why not do all the "other" communication over that websocket connection instead of having them send HTTP requests with their websocket ID? Perhaps HTTP fits your use case better, but it could be something to think about :)
I have built a web application using the JavaScript stack (MongoDB, ExpressJS, AngularJS, NodeJS). The registration works, the authentication works, and the chat, which uses Socket.io, works, but I need a way of distinguishing which client is sending and which client is receiving a message in order to perform further functions with the user's data.
P.S. Since this is a project that I cannot publish, there are no code snippets in my post; hopefully that is alright.
The ultimate design will depend on what you are trying to achieve. Is it a one-to-one chat service, or maybe a one-to-many broadcast? Is the service anonymous? How do you want users to find each other? How secure does it need to be?
As a starting point I would assign a unique identifier (UID) to each connection (client). This will allow the server to direct traffic by creating "conversation" pairings or perhaps a list of listeners (subscribers) and writers (publishers).
A connected user could then enter the UID of a second connected user and your service can post messages back and forth using the uid pairing.
conversation(user123,user0987)
user123 send to user0987
user0987 send to user123
or go bulletin board/chat room style:
create a "board" - just a destination that is a list of all text sent
user123 "joins" board "MiscTalk"
user0987 "joins board "MiscTalk"
Each sends text to the server, the server adds that text to the board, and each client polls the board for changes.
Every socket can send or receive; your program must track "who" is connected on a socket and direct traffic between them.
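To make the data structures concrete, here is a minimal sketch of that UID registry and pairing idea. It is written in Go purely for illustration (your stack is Node/Socket.io, so read it as typed pseudocode); the Conn interface and the Hub name are invented for this sketch:

package chat

// Conn is whatever your transport gives you per connected client
// (a Socket.io socket in your stack); Send is the only thing the hub needs.
type Conn interface {
	Send(text string) error
}

// Hub tracks connected clients by UID and the one-to-one pairings between them.
type Hub struct {
	clients map[string]Conn   // UID -> connection
	pairs   map[string]string // UID -> UID of conversation partner
}

func NewHub() *Hub {
	return &Hub{clients: make(map[string]Conn), pairs: make(map[string]string)}
}

// Pair links two connected users so messages can be routed between them.
func (h *Hub) Pair(a, b string) {
	h.pairs[a] = b
	h.pairs[b] = a
}

// Deliver routes a message from one UID to its partner, so the server always
// knows "who" sent it and "who" should receive it.
func (h *Hub) Deliver(fromUID, text string) error {
	to, ok := h.pairs[fromUID]
	if !ok {
		return nil // not in a conversation; drop or queue as you prefer
	}
	if conn, ok := h.clients[to]; ok {
		return conn.Send(fromUID + ": " + text)
	}
	return nil
}

A board/chat room is the same idea with a list of member UIDs per board name instead of a one-to-one pairs map.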
I think a fine way to handle the clients is to create a Handler class and a Client object, and to keep a clientList in the handler; this makes it easier to distinguish the clients. Some months ago I built a simple open-source one-to-one random chat using Socket.io, and here are the handler and the client class.
I hope this example can help you.
1.) Create a global server variable and bind a connections property to it, and whenever authentication succeeds, store the socket_id against the ID (user_id etc.) that you get after decoding your token.
var http = require('http');
global.server = http.createServer(app); // app is your Express app
server.connections = {};                // map: user id -> socket_id
If server.connections.hasOwnProperty(id), then use socket emit to send your message; else store the socket_id against your ID and then send your message.
In this way you just need to know the unique token of the target user to send the message.
2.) You can also use the concept of rooms.
If authentication succeeds, use:
socket.room = id; socket.join(id);
When sending a message, use client.in(id).emit("YOUR-EVENT-NAME", message)
Note: make your own flow; this is just an overview of what I have implemented in the past. You should consider using Redis for storing socket_ids.
Let's say we have several clients connected to App Engine using the Channel API. Each client sends messages, which should be propagated to the other connected clients according to some rules. The tricky part is that the clients may not be connected to the same App Engine instance.
Is there any way to push data from one instance to the others?
(Yes, I know about Memcache, but this would require some kind of polling.)
You're asking two questions here.
a. Can you push data from one instance to another without the use of polling? The answer is generally no.
b. Can one client send messages to the server that can be propagated to other clients? Yes, and this does not require propagating messages to other server-side instances.
Consider the Channel API as a service. Clients are connected to the Channel API service; they are not connected to any particular instance. Therefore any instance can send messages to any client.
1. You'll need to store the Channel tokens of your clients in the datastore, in some way that's queryable to match your rules.
2. Your client makes an HTTP request to send a message to your server.
3. The handler on the server queries for the channel tokens it needs to propagate the message to (either from memcache or the datastore).
4. The handler on the server sends messages to all those clients.
If the list of destination clients is extremely large, you might want to do steps 3/4 in a task queue where the operation can run longer.
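A rough sketch of those four steps, using the Go bindings of the (now long-deprecated) Channel API under google.golang.org/appengine. Treat the import paths and exact signatures as something to verify against your SDK; the ChannelClient kind, its fields, and the form parameters are invented for this example (in Java the equivalent send call is channelService.sendMessage):

package broadcast

import (
	"net/http"

	"google.golang.org/appengine"
	"google.golang.org/appengine/channel"
	"google.golang.org/appengine/datastore"
	"google.golang.org/appengine/log"
)

// ChannelClient is a hypothetical datastore kind with one entity per
// connected client, written when the channel token was created (step 1).
type ChannelClient struct {
	ClientID string
	Room     string
}

// handleSend is the handler the client POSTs its message to (step 2).
func handleSend(w http.ResponseWriter, r *http.Request) {
	ctx := appengine.NewContext(r)
	msg := r.FormValue("msg")
	room := r.FormValue("room")

	// Step 3: query the stored clients that should receive this message.
	var clients []ChannelClient
	q := datastore.NewQuery("ChannelClient").Filter("Room =", room)
	if _, err := q.GetAll(ctx, &clients); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}

	// Step 4: any instance can push to any client via the Channel API.
	for _, c := range clients {
		if err := channel.Send(ctx, c.ClientID, msg); err != nil {
			log.Errorf(ctx, "send to %s failed: %v", c.ClientID, err)
		}
	}
	w.WriteHeader(http.StatusNoContent)
}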
It does not matter which instance a client is connected to; that's hidden from you by the API.
Clients can only "reply" to messages via standard HTTP requests; they don't actually have any way to respond via the Channel API directly.
So client A on instance A1 wants to send a message to client B on instance B1.
Client A posts to a handler. That might be instance A1 or B1; it does not matter which, because the server now passes the message on to client B via the Channel API, whatever instance client B is connected to.
The real point is that no App Engine instance has any data at all, in general. So it does not matter which instance you connect to, it might be the 99th instance or the very first to start up. So you have to design your application so that it's irrelevant what instance is in use.
Client sends message to server via HTTP.
Server sends message to N clients via the channel API.
Channel API does not make a fixed frontend-instance-to-client connection. Any frontend instance can push message to channel if it knows the channel ID.
What you need to do is pass messages cross-channel.
1. User one sends a message to the server normally (e.g. via GET).
2. The server looks up the channel ID of the second user and pushes the message.
3. Repeat the procedure in the other direction: second user to first user.
I'm trying to design a client program that connects to a remote server and sends various messages/requests to it and expects responses based on the requests sent (e.g. send a join message and wait for a response, then either query for some resource or ask for some info, etc., in no particular order).
I would like to design the client such that the user can choose any of the possible requests to send after joining the server (after completing one request and getting a response, if any, it should allow them to carry out further requests or quit). Something like a menu of actions that it returns to each time (while also waiting for any data from the server)? However, I can't seem to figure out how this could be done. Is there a way to do this (preferably without getting into forking/threads)?
Any inputs on this would be really great. TIA
I would start off with a simple chat server to get a feel for socket programming. Google "example TCP chat server" or something; you'll end up with simple examples like this: http://www.cs.ucsb.edu/~almeroth/classes/W01.176B/hw2/examples/tcp-server.c. Once you are able to telnet to your server and read/write to your clients, you should be able to progress from there and perform actions when your clients issue a specific command, and that sort of thing.
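For what it's worth, here is a minimal sketch of such a client in Go rather than C. The address and the wire commands (JOIN, QUERY, INFO) are placeholders for whatever your protocol actually uses, and the reader goroutine is not a fork or an OS thread; if you want to stay strictly single-threaded in C, the equivalent trick is a select()/poll() loop over the socket and stdin.

package main

import (
	"bufio"
	"fmt"
	"net"
	"os"
)

func main() {
	// Placeholder address; replace with your server's host:port.
	conn, err := net.Dial("tcp", "localhost:4000")
	if err != nil {
		fmt.Fprintln(os.Stderr, "connect:", err)
		os.Exit(1)
	}
	defer conn.Close()

	// Print anything the server pushes, independently of the menu loop below.
	go func() {
		sc := bufio.NewScanner(conn)
		for sc.Scan() {
			fmt.Println("server:", sc.Text())
		}
		fmt.Println("server closed the connection")
		os.Exit(0)
	}()

	stdin := bufio.NewScanner(os.Stdin)
	for {
		fmt.Print("1) join  2) query resource  3) request info  q) quit\n> ")
		if !stdin.Scan() {
			return
		}
		switch stdin.Text() {
		case "1":
			fmt.Fprintln(conn, "JOIN") // placeholder commands for your protocol
		case "2":
			fmt.Fprintln(conn, "QUERY some-resource")
		case "3":
			fmt.Fprintln(conn, "INFO")
		case "q":
			return
		default:
			fmt.Println("unknown choice")
		}
	}
}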
I created a working Google Channel API and now I would like to send a message to all clients.
I have two servlets. The first creates the channel and tells the clients the user ID and token. The second one is called by an HTTP POST and should send the message.
To send a message to a client, I use:
channelService.sendMessage(new ChannelMessage(channelUserId, "This is a server message!"));
This sends the message just to one client. How could I send this to all?
Have I to store every Id which I use to create a channel and send the message for every id? How could I pass the Ids to the second servlet?
Using the Channel API it is not possible to create one channel and then have many subscribers to it. The server creates a unique channel for each individual JavaScript client, so if you have the same client ID, the messages will be received by only one of them.
If you want to send the same message to multiple clients, in short, you will have to keep track of active clients and send the same message to each of them.
If that approach sounds scary and messy, consider using PubNub for your push notification messages, where you can easily create one channel and have many subscribers. To make it run on Google App Engine is not that hard, since they support almost any platform or device.
I know this is an old question, but I just finished an open source project that uses the Channel API to implement a publish/subscribe model, i.e. you can have multiple users subscribe to a single topic, and then all those subscribers will be notified when anyone publishes a message to the topic. It also has some nice features like automatic message persistence if desired, and "return receipts", where a subscriber can be notified whenever OTHER subscribers receive that message. See https://github.com/adevine/gaewebpubsub#gae-web-pubsub. Licensed under Apache 2.0 license.