How to identify a node that has no subscribers? (ZeroMQ, C)

I am currently designing a message broker in CZMQ (the high-level C binding for ZeroMQ). My broker design is somewhat like the typical star topology, but with additional REP sockets to communicate with the publishers and the subscribers. I also have a PUB socket to notify publishers about subscribers.
When a node connects to the broker it has to do a handshake through this REP socket, and that node then gets added to an internal list on the server. My node structure looks like this:
struct node {
    int type;
    char *uuid;
    char *path;
    size_t path_len;
    int64_t timestamp;
};
The path variable is the subscription path that the publisher/subscriber is using, e.g. /path/to/subscription.
I keep two different internal lists, one for publishers and one for subscribers. I want to design it so that when a publisher has no subscribers, the server notifies that publisher so it can stop sending out messages until another subscriber subscribes to that publisher's path.
The problem I am having is that I don't know how to figure out whether a publisher has no subscribers. For example, let's say a publisher publishes to /path/to/data; I then need to find out whether there are any subscribers subscribed to /path, /path/to or /path/to/data.
Each node in this network has a unique ID, the uuid. Each publisher has a SUB-socket that subscribes to its own uuid so that it can receive updates from the server.

Several options:
Alt.1)
Create your own explicit subscription-management layer, independent of the standard PUB/SUB Scalable Formal Communication Behaviour Archetype Pattern. There you can achieve whatever functionality your design wants to have.
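For illustration, here is a minimal sketch of such a check in plain C, reusing the node struct from the question. The array-based subscriber list and the helper names (path_covers, publisher_has_subscribers) are placeholders for whatever the broker actually stores; a subscriber covers a publisher when the subscriber's path is a prefix of the publisher's path on a '/'-segment boundary, so "/path" and "/path/to" both match a publisher on "/path/to/data".

#include <stdbool.h>
#include <string.h>

/* struct node exactly as defined in the question above */

static bool path_covers(const struct node *sub, const struct node *pub)
{
    if (sub->path_len > pub->path_len)
        return false;
    if (strncmp(sub->path, pub->path, sub->path_len) != 0)
        return false;
    /* exact match, or the publisher path continues with a new segment
       (assumes paths are stored without a trailing '/')               */
    return pub->path[sub->path_len] == '\0'
        || pub->path[sub->path_len] == '/';
}

/* true if at least one of the n subscribers covers this publisher */
static bool publisher_has_subscribers(struct node *subs[], size_t n,
                                      const struct node *pub)
{
    for (size_t i = 0; i < n; i++)
        if (path_covers(subs[i], pub))
            return true;
    return false;
}

The broker could run this check whenever a subscriber unregisters or its handshake times out, and if it returns false for some publisher, publish a "pause" notification addressed to that publisher's uuid over the existing PUB socket.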
Alt.2)
Use the standard XPUB/XSUB Archetype, where both subscription and unsubscription messages are exposed to your code. These are, however, weak signals: one may simply miss such a message, or never have been intended to receive it, so your handling strategy must be robust enough not to get into trouble if these signalling messages never arrive.
ZMQ_XPUB
Same as ZMQ_PUB except that you can receive subscriptions from the peers in form of incoming messages. Subscription message is a byte 1 (for subscriptions) or byte 0 (for unsubscriptions) followed by the subscription body. Messages without a sub/unsub prefix are also received, but have no effect on subscription status.
ZMQ_XSUB
Same as ZMQ_SUB except that you subscribe by sending subscription messages to the socket. Subscription message is a byte 1 (for subscriptions) or byte 0 (for unsubscriptions) followed by the subscription body. Messages without a sub/unsub prefix may also be sent, but have no effect on subscription status.
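For illustration, a minimal CZMQ sketch of reading those (un)subscription frames on a broker-side XPUB socket; the endpoint is a placeholder, the publisher-facing side of the broker is omitted, and the per-topic bookkeeping is only indicated in a comment:

#include <czmq.h>

int main(void)
{
    /* subscribers connect here; endpoint is a placeholder */
    zsock_t *xpub = zsock_new_xpub("tcp://*:5556");
    assert(xpub);

    while (!zsys_interrupted) {
        /* each (un)subscription arrives as one frame:
           first byte 1 = subscribe, 0 = unsubscribe, rest = topic */
        zframe_t *frame = zframe_recv(xpub);
        if (!frame)
            break;                                   /* interrupted */

        byte  *data = zframe_data(frame);
        size_t size = zframe_size(frame);
        if (size >= 1 && (data[0] == 0 || data[0] == 1)) {
            bool subscribe = (data[0] == 1);
            printf("%s '%.*s'\n", subscribe ? "SUB" : "UNSUB",
                   (int)(size - 1), (char *)(data + 1));
            /* here the broker would update a per-topic subscriber count
               and, when a topic drops to zero, notify its publisher(s)
               via the existing uuid-addressed PUB socket              */
        }
        zframe_destroy(&frame);
    }
    zsock_destroy(&xpub);
    return 0;
}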
Last but not least, one need not worry about there being no SUBs present at all: the Context() instance's internal data-pump management is well aware of the connection state and has no trouble with an empty list of SUBs. And if you use .setsockopt( ZMQ_CONFLATE, 1 ), the locally allocated queue storage keeps just the one most recent message for each still-"live" SUB.
That's cute, isn't it?

Related

how to store messages received from websockets

We are building an application with a chat system as part of our service. For that we are using websockets, as they are easily available on all platforms (iOS, Android, web).
But we need to store all the messages received through the websockets.
We realized websockets are extremely fast, so if we fire a database query for every message we receive through the websockets, there is a chance that some messages would not be stored, or might be lost.
Let me explain with two cases:
Case 1
In one-to-one chat, when we receive a message we store it in a variable called $msg and simply pass this $msg on to the intended user. If we add some more logic, like firing a query to store the message before sending it to the user, that query takes some time, say 1 or 2 seconds; with this logic, some messages received through the sockets will be lost,
so we have to deliver the message as soon as we receive it.
Case 2
There could be another approach: fire the query after sending the message to the intended user. But in that time there is a chance the $msg variable has changed its value many times, in just a fraction of a second.
Let's see an example.
Let's assume the variable $msg holds 'hello' and we pass this $msg variable to the function that stores the message in the database. But as we know, websockets are extremely fast, so there is a chance the value stored in $msg has changed many times in the meantime, and we have lost the message 'hello' that we wanted to store.
Could we implement a message queue (the classic data structure) in that case, or should we use services like Apache Kafka or RabbitMQ?
Note: we are already aware of the real-time database services provided by the tech giants, but due to their high cost we are not able to use that kind of service.
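For illustration only, here is a minimal sketch of that in-process queue idea, written in C (pthreads) for consistency with the rest of this page; store_in_database() is a hypothetical placeholder. The socket handler copies each incoming message into its own queue node right away and returns, and a separate worker thread drains the queue and writes to the database, so slow storage never delays delivery and later changes to the original $msg buffer cannot affect what gets stored.

#include <pthread.h>
#include <stdlib.h>
#include <string.h>

void store_in_database(const char *text);   /* hypothetical persistence call */

struct msg_node {
    char            *text;   /* private copy of one chat message */
    struct msg_node *next;
};

static struct msg_node *head, *tail;
static pthread_mutex_t  lock  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t   ready = PTHREAD_COND_INITIALIZER;

/* Called from the websocket handler: copies the message and returns at once,
   so delivery to the recipient is never delayed by the database. */
void enqueue_for_storage(const char *msg)
{
    struct msg_node *n = malloc(sizeof *n);
    if (!n) return;                       /* drop on OOM; a real system would log */
    n->text = strdup(msg);                /* own copy - later changes to $msg are harmless */
    n->next = NULL;

    pthread_mutex_lock(&lock);
    if (tail) tail->next = n; else head = n;
    tail = n;
    pthread_cond_signal(&ready);
    pthread_mutex_unlock(&lock);
}

/* Worker thread: drains the queue and persists messages at its own pace. */
void *storage_worker(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (!head)
            pthread_cond_wait(&ready, &lock);
        struct msg_node *n = head;
        head = n->next;
        if (!head) tail = NULL;
        pthread_mutex_unlock(&lock);

        store_in_database(n->text);       /* may take seconds - nobody waits on it */
        free(n->text);
        free(n);
    }
    return NULL;
}

A broker such as RabbitMQ or Kafka gives you the same decoupling plus durability across process restarts, which an in-memory queue like this cannot.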

Aggregate results of batch consumer in Camel (for example from SQS)

I'm consuming messages from an SQS FIFO queue with maxMessagesPerPoll=5 set.
Currently I'm processing each message individually, which is a total waste of resources.
In my case, as we are using a FIFO queue and all of those 5 messages are related to the same object, I could process them all together.
I thought this might be done by using the aggregate pattern, but I wasn't able to get any results.
My consumer route looks like this:
from("aws-sqs://my-queue?maxMessagesPerPoll=5&messageGroupIdStrategy=usePropertyValue")
.process(exchange -> {
// process the message
})
I believe it should be possible to do something like this
from("aws-sqs://my-queue?maxMessagesPerPoll=5&messageGroupIdStrategy=usePropertyValue")
.aggregate(const(true), new GroupedExchangeAggregationStrategy())
.completionFromBatchConsumer()
.process(exchange -> {
// process ALL messages together as I now have a list of all exchanges
})
but the processor is never invoked.
Second thing:
If I'm able to make this work, when is the ACK sent to SQS? When each individual message is processed, or when the aggregated exchange finishes processing? I hope the latter.
When the processor is not called, the aggregator probably still waits for new messages to aggregate.
You could try to use completionSize(5) instead of completionFromBatchConsumer() for a test. If this works, the batch completion definition is the problem.
For the ACK against the broker: unfortunately no. I think the message is committed when it arrives at the aggregator.
The Camel aggregator component is a "stateful" component and therefore it must end the current transaction.
For this reason you can equip such components with persistent repositories to avoid data loss when the process is killed. In such a scenario the already aggregated messages would obviously be lost if you don't have a persistent repository attached.
The problem lies in GroupedExchangeAggregationStrategy.
When I use this strategy, the output is an "array" of all exchanges. This means that the exchange that reaches the completion predicate no longer has the initial properties; instead it has CamelGroupedExchange and CamelAggregatedSize, which are of no use to completionFromBatchConsumer().
As I don't actually need all the exchanges to be aggregated, it's enough to use GroupedBodyAggregationStrategy. Then the exchange properties remain as in the original exchange, and just the body contains an "array".
Another solution would be to use completionSize(Predicate predicate) with a custom predicate that extracts the necessary value from the grouped exchanges.

registration process in sip protocol

I am new to the SIP protocol. I went through the basics and have the following doubts:
1) When I captured the registration process with Wireshark, I noticed that the From and To headers are the same. RFC 3261 says that the To header indicates whose registration is being done and From indicates the person responsible for the registration, and that the To and From fields are the same unless it is a third-party registration. It is not clear to me how they can both be the same, and what a third-party registration is.
2) Does SIP have any keep-alive mechanism? In Zoiper we have the option of giving an expiry time (3600 by default), but for registration it is 70, for SUBSCRIBE it is 60 and for INVITE it is 3600. How are these values automatically selected?
3) The user agent finds registrars using configuration, DNS lookup and multicasting. In what case is multicasting preferred? Please explain that method as well.
What I did was: installed an Asterisk server, created a Zoiper account, and captured the REGISTER messages with Wireshark in loopback mode. Screenshots of the captures are attached. Thanks in advance.
Regarding the To and From fields in REGISTER:
The From field here is just a logical field which should not be checked. If it differs from the To field, that means that From is registering on behalf of To.
But I can't think of any scenario where this should be checked (maybe it can be used for something app-specific in some complicated scenario). You should just follow the usual authentication process (digest auth or other) and skip this field.
Regarding point 2 (expiry time):
The settings you mention in Zoiper are just arbitrary defaults.
Low values (below 200) can be used if the client or server doesn't support NAT keep-alive (via NOTIFY or simple \r\n\r\n messages). In this case the REGISTER message will keep the UDP binding alive in NAT routers.
Higher values can save some server-side processing work and CPU resources.
I usually recommend a 600-second expire timer and 40-second NAT keep-alive messages.
For INVITE the Expires field actually means the maximum ring time, and it is rarely used.
Regarding point 3 (finding registrars):
The SIP server (registrar server) is usually entered manually in the client configuration or set by auto-provisioning. If the server is on the same LAN, you might also be able to detect it by multicast, but this is rarely used.
Here is a good tutorial.

IBM MQ 7.5 using C-API to check if local or non-local cluster queue

I have the following issue (simplified):
Two queue managers - QM1 and QM2 - form a cluster.
QMgr QM1
alias queue Q1 with base queue Q1.L, which is a local cluster queue (i.e. defined on QM1)
alias queue Q2 with base queue Q2.L, which is a non-local cluster queue (i.e. defined on QM2)
QMgr QM2
local cluster queue Q2.L
I can open the alias queues for inquiry, request MQCA_BASE_Q_NAME and I get the base queues in both cases.
I need to programmatically find out whether this base queue is a local cluster queue or a remote (non-local) cluster queue. We are using the C API (MQI).
I open the base queue for inquiry and, based on this documentation:
http://www-01.ibm.com/support/knowledgecenter/#!/SSFKSJ_7.5.0/com.ibm.mq.ref.dev.doc/q101840_.htm
(see Usage Notes - 4.)
I can request only the following attributes:
- MQCA_Q_DESC, MQCA_Q_NAME, MQIA_DEF_BIND, MQIA_DEF_PERSISTENCE, MQIA_DEF_PRIORITY, MQIA_INHIBIT_PUT, MQIA_Q_TYPE
This works, but for a cluster queue MQIA_Q_TYPE returns MQQT_CLUSTER (7). This is good - I know that I am handling a cluster queue - but it is not enough: local or non-local?
Checking the cmqc.h header, I can see some other interesting attribute selectors, but unfortunately they do not work.
For example MQIA_CLUSTER_Q_TYPE: when passing it in the selector vector for inquiry, I get back
CompCode:2, Reason:2067 - Attribute selector not valid.
In the PCF documentation this seems to be possible:
http://www-01.ibm.com/support/knowledgecenter/#!/SSFKSJ_7.5.0/com.ibm.mq.ref.adm.doc/q087800_.htm
(Table 1, column Cluster queue)
Is this some limitation of the C API? Any workaround?
Cheers, Miro
I know what you want to do, but why you would want to do it is an interesting question. I hope that what you are working on is instrumentation and monitoring rather than an application program. If a business application needs to know this information, then the design is almost certainly broken. The whole idea of async messaging was to decouple the sender from the receiver of the message, and thus the need for the app to know or care whether the destination is local or not. This is why the API doesn't address your question - to do so for business apps breaks the async model.
That said, the simplest way is to use MQIA_CURRENT_Q_DEPTH and inquire on the queue depth. If the queue is non-local, the call will fail.
(Deleted the previous answer about using PCF to DIS QL since this is much simpler and 100% accurate.)
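For illustration, a minimal MQI sketch of that depth check; the queue manager and queue names are placeholders, error handling is reduced to the bare minimum, and the exact completion/reason code you get back for a non-local cluster queue instance may vary with your setup:

#include <cmqc.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    MQCHAR48 qmName = "QM1";                      /* placeholder QMgr name   */
    MQOD     od     = {MQOD_DEFAULT};
    MQHCONN  hconn;
    MQHOBJ   hobj;
    MQLONG   compCode, reason;
    MQLONG   selector = MQIA_CURRENT_Q_DEPTH;
    MQLONG   depth    = 0;

    MQCONN(qmName, &hconn, &compCode, &reason);
    if (compCode == MQCC_FAILED) {
        printf("MQCONN failed, reason %d\n", (int)reason);
        return 1;
    }

    strncpy(od.ObjectName, "Q2.L", MQ_Q_NAME_LENGTH);     /* placeholder queue */
    MQOPEN(hconn, &od, MQOO_INQUIRE | MQOO_FAIL_IF_QUIESCING,
           &hobj, &compCode, &reason);

    if (compCode != MQCC_FAILED) {
        MQINQ(hconn, hobj, 1, &selector, 1, &depth, 0, NULL,
              &compCode, &reason);
        if (compCode == MQCC_OK)
            printf("depth %d available - local cluster queue\n", (int)depth);
        else
            printf("depth not available (reason %d) - "
                   "treat as a non-local cluster queue\n", (int)reason);
        MQCLOSE(hconn, &hobj, MQCO_NONE, &compCode, &reason);
    } else {
        printf("MQOPEN for inquire failed, reason %d\n", (int)reason);
    }

    MQDISC(&hconn, &compCode, &reason);
    return 0;
}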
Of the 60 queue attributes available for inquiry, why do you believe that you "can request only the following attributes: - MQCA_Q_DESC, MQCA_Q_NAME, MQIA_DEF_BIND, MQIA_DEF_PERSISTENCE, MQIA_DEF_PRIORITY, MQIA_INHIBIT_PUT, MQIA_Q_TYPE"? Is this a local shop standard?

Is Socket.SendAsync thread safe effectively?

I was fiddling with Silverlight's TCP communication and I was forced to use the System.Net.Sockets.Socket class which, on the Silverlight runtime, has only asynchronous methods.
I was wondering what happens if two threads call SendAsync on a Socket instance within a very short time of each other.
My single worry is not to have intermixed bytes going through the TCP channel.
Since it is an asynchronous method, I suppose the message gets placed in a queue from which a single thread dequeues, so no such thing (intermixing the content of the messages on the wire) will happen.
But I am not sure, and MSDN does not state anything in the method's description. Is anyone sure of this?
EDIT1 : No, locking on an object before calling SendAsync such as :
lock(this._syncObj)
{
    this._socket.SendAsync(arguments);
}
will not help, since this serializes the requests to send data, not the data actually sent.
In order to call SendAsync you first need to have called ConnectAsync with an instance of SocketAsyncEventArgs. It's the instance of SocketAsyncEventArgs that represents the connection between the client and the server. Calling SendAsync with the same instance of SocketAsyncEventArgs that has just been used for an outstanding call to SendAsync will result in an exception.
It is possible to make multiple outstanding calls to SendAsync on the same Socket object, but only using different instances of SocketAsyncEventArgs. For example (in a parallel universe where this might be necessary) you could be making multiple HTTP POSTs to the same server at the same time, but on different connections. This is perfectly acceptable and normal; neither client nor server will get confused about which packet is which.
