IBM MQ 7.5 using C-API to check if local or non-local cluster queue

I have the following issue (simplified):
Two queue managers - QM1 and QM2 - form a cluster.
QMgr QM1:
- alias queue Q1 with base queue Q1.L, which is a local cluster queue (i.e. defined on QM1)
- alias queue Q2 with base queue Q2.L, which is a non-local cluster queue (i.e. defined on QM2)
QMgr QM2:
- local cluster queue Q2.L
I can open the alias queues for inquiry, request MQCA_BASE_Q_NAME, and I get the base queue names in both cases.
I need to programmatically find out whether this base queue is a local cluster queue or a remote (non-local) cluster queue. We are using the C API (MQI).
I open the base queue for inquiry and, based on this documentation:
http://www-01.ibm.com/support/knowledgecenter/#!/SSFKSJ_7.5.0/com.ibm.mq.ref.dev.doc/q101840_.htm
(see Usage Notes, item 4)
I can request only the following attributes:
- MQCA_Q_DESC, MQCA_Q_NAME, MQIA_DEF_BIND, MQIA_DEF_PERSISTENCE, MQIA_DEF_PRIORITY, MQIA_INHIBIT_PUT, MQIA_Q_TYPE
This works, but for a cluster queue MQIA_Q_TYPE returns MQQT_CLUSTER (7). This is good - I know that I am handling a cluster queue - but it is not enough: is it local or non-local?
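A minimal sketch of that inquiry (hConn and hBaseObj are assumed to be an existing connection and the base queue already opened with MQOO_INQUIRE):

#include <cmqc.h>

/* Sketch only: inquire the queue type of the already-opened base queue. */
static MQLONG inquire_q_type(MQHCONN hConn, MQHOBJ hBaseObj)
{
    MQLONG selector = MQIA_Q_TYPE;
    MQLONG qType    = 0;
    MQLONG compCode, reason;

    MQINQ(hConn, hBaseObj,
          1, &selector,        /* one selector: MQIA_Q_TYPE          */
          1, &qType,           /* one integer attribute comes back   */
          0, NULL,             /* no character attributes requested  */
          &compCode, &reason);

    return (compCode == MQCC_OK) ? qType : -1;
}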
Checking the cmqc.h header, I can see some other interesting attribute selectors, but unfortunately they do not work.
For example MQIA_CLUSTER_Q_TYPE: when I pass it in the selector vector for the inquiry, I get back
CompCode: 2, Reason: 2067 - attribute selector not valid.
In the PCF documentation this seems to be possible:
http://www-01.ibm.com/support/knowledgecenter/#!/SSFKSJ_7.5.0/com.ibm.mq.ref.adm.doc/q087800_.htm
(Table 1, column Cluster queue)
Is this a limitation of the C API? Is there any workaround?
Cheers, Miro

I know what you want to do, but why you would want to do it is an interesting question. I hope that what you are working on is instrumentation and monitoring rather than an application program. If a business application needs to know this information, then the design is almost certainly broken. The whole idea of async messaging was to decouple the sender from the receiver of the message, and thus the need for the app to know or care whether the destination is local or not. This is why the API doesn't address your question - to do so for business apps breaks the async model.
That said, the simplest way is to use MQIA_CURRENT_Q_DEPTH and inquire on the queue depth. If the queue is non-local, the call will fail.
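A rough sketch of that check (the helper name and handles are illustrative, not from the original post, and error handling is trimmed):

#include <cmqc.h>
#include <string.h>

/* Returns non-zero if the named queue is hosted on this queue manager:
   depth can only be inquired on a locally hosted queue, so a failing
   MQINQ on MQIA_CURRENT_Q_DEPTH means "not local here".               */
static int queue_is_local(MQHCONN hConn, const char *baseQName)
{
    MQOD   od   = {MQOD_DEFAULT};
    MQHOBJ hObj = MQHO_UNUSABLE_HOBJ;
    MQLONG selector = MQIA_CURRENT_Q_DEPTH;
    MQLONG depth = 0;
    MQLONG compCode, reason, inqCC, inqRC;

    strncpy(od.ObjectName, baseQName, MQ_Q_NAME_LENGTH);

    MQOPEN(hConn, &od, MQOO_INQUIRE | MQOO_FAIL_IF_QUIESCING,
           &hObj, &compCode, &reason);
    if (compCode == MQCC_FAILED)
        return 0;                       /* could not even open it here */

    MQINQ(hConn, hObj, 1, &selector, 1, &depth, 0, NULL, &inqCC, &inqRC);

    MQCLOSE(hConn, &hObj, MQCO_NONE, &compCode, &reason);

    return (inqCC == MQCC_OK);          /* failed inquiry => not local */
}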
(Deleted the previous answer about using PCF to DIS QL since this is much simpler and 100% accurate.)
Of the 60 queue attributes available for inquiry, why do you believe that you "can request only the following attributes: - MQCA_Q_DESC, MQCA_Q_NAME, MQIA_DEF_BIND, MQIA_DEF_PERSISTENCE, MQIA_DEF_PRIORITY, MQIA_INHIBIT_PUT, MQIA_Q_TYPE"? Is this a local shop standard?

Related

Flink stateful function address resolution for messaging

In a Flink datastream, suppose that an upstream operator is hosted on machine/task manager m. How does the upstream operator know the machine (task manager) m’ on which the downstream operator is hosted? Is it during the initial scheduling of the job's sub/tasks (operators) by the JobManager that such data flow paths between downstream/upstream operators are established, and are those paths fixed for the lifetime of the application?
More generally, consider Flink Stateful Functions, where dynamic messaging is supported and data flows are not fixed or predefined. Given a function with key k that needs to send a message/event to another function with key k’, how does function k find the address of function k’ in order to message it? Does the Flink runtime keep key-to-machine mappings in some distributed data structure (e.g. a DHT, as in Microsoft Orleans), and does every invocation of a function involve access to such a data structure?
Note that I come from a Spark background, where, given the RDD/batch model, job graph tasks are executed consecutively (broken at shuffle boundaries), and each shuffle subtask is told which machines hold the subset of keys it should pull and process.
Thank you.
Even with stateful functions, the topology of the underlying Flink job is fixed at the time the job is launched. Every stateful functions job uses a job graph more or less like this one (the ingresses vary, but the rest is always like this):
Here you see that all loaded ingresses become Flink source operators emitting the input messages,
and routers become flatmap operators chained to those sources.
The flatmaps acting as routers transform the input messages into internal event envelopes, which
essentially just wrap the message payload with its destination logical address. Envelopes are the
on-the-wire data type for all messages flowing through the stream graph.
The Stateful Functions runtime is centered on a function dispatcher operator, which runs instances of all loaded functions across all modules. In between the router flatmap operator and the function dispatcher operator is a keyBy operation which re-partitions the input streams using the target destination id as the key. This network shuffle guarantees that all messages intended for a given id are sent to the same instance of the function dispatcher operator.
On receipt, the function dispatcher extracts the target function address from the envelope, loads that function instance, and then invokes the function with the wrapped input (which was also in the envelope).
How do different instances of the function dispatcher send messages to each other?
This is done by co-locating each function dispatcher with a feedback operator.
All outgoing messages go through another network shuffle using the target function id as the key.
This feedback operator creates a loop, or iteration, in the job graph. Stateful Functions can have cycles, or loops, in their messaging patterns, and are not limited to processing data with a DAG.
The feedback channel is checkpointed; messages are never lost in the case of failure.
For more on this, I recommend this Flink Forward talk by Tzu-Li (Gordon) Tai: Stateful Functions: Polyglot Event-Driven Functions for Stateful Distributed Applications. The figure above is from his talk.

How to identify a node that has no subscriber? (ZeroMQ)

I am currently designing a message broker in CZMQ (the high-level C binding for ZeroMQ). My broker design is somewhat like the typical star topology, but with additional REP sockets to communicate with the publishers and the subscribers. I also have a PUB socket to notify publishers about subscribers.
When a node connects to the broker it has to do a handshake through this REP socket, and that node then gets added to an internal list in the server. My node structure looks like this:
struct node {
    int type;
    char *uuid;
    char *path;
    size_t path_len;
    int64_t timestamp;
};
The path variable is the subscription path that the publisher/subscriber is using, e.g. /path/to/subscription.
I keep two different internal lists, one for publishers and one for subscribers. I want to design it so that when a publisher has no subscribers, the server notifies that publisher so it can stop sending out messages until another subscriber subscribes to that publisher.
The problem I am having is that I don't know how to figure out whether a publisher has any subscribers. For example, let's say a publisher publishes to /path/to/data; I then need to find out whether there is any subscriber subscribed to /path, /path/to or /path/to/data.
Each node in this network has a unique ID, the uuid. Each publisher has a SUB socket that subscribes to its own uuid so that it can receive updates from the server.
Several options:
Alt.1)
Create your own explicit subscription-management layer, independent of the standard PUB/SUB Scalable Formal Communication Behaviour Archetype Pattern. There you can achieve whatever functionality your design wants to have.
Alt.2)
Use the standard XPUB/XSUB Archetype, where subscription and unsubscription messages are exposed to your code. These are weak signals, though - a message may simply be missed, or never have been intended for you at all - so your handling strategy must be robust enough not to get into trouble if such weak-signalling messages do not arrive.
ZMQ_XPUB
Same as ZMQ_PUB except that you can receive subscriptions from the peers in form of incoming messages. Subscription message is a byte 1 (for subscriptions) or byte 0 (for unsubscriptions) followed by the subscription body. Messages without a sub/unsub prefix are also received, but have no effect on subscription status.
ZMQ_XSUB
Same as ZMQ_SUB except that you subscribe by sending subscription messages to the socket. Subscription message is a byte 1 (for subscriptions) or byte 0 (for unsubscriptions) followed by the subscription body. Messages without a sub/unsub prefix may also be sent, but have no effect on subscription status.
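A rough sketch of Alt.2, using the plain libzmq C API for brevity rather than CZMQ (the endpoint and buffer size are illustrative only):

#include <stdio.h>
#include <zmq.h>

int main(void)
{
    void *ctx  = zmq_ctx_new();
    void *xpub = zmq_socket(ctx, ZMQ_XPUB);
    zmq_bind(xpub, "tcp://*:5556");                /* illustrative endpoint */

    char buf[256];
    for (;;) {
        int n = zmq_recv(xpub, buf, sizeof(buf) - 1, 0);
        if (n < 1)
            continue;
        buf[n] = '\0';
        if (buf[0] == 1)
            printf("subscribe:   \"%s\"\n", buf + 1);   /* topic prefix */
        else if (buf[0] == 0)
            printf("unsubscribe: \"%s\"\n", buf + 1);
        /* Update your per-prefix subscriber counts here and notify any
           publisher whose prefix count has dropped to zero. Note that
           ZeroMQ topic matching is itself prefix-based, so a SUB
           subscribed to "/path" already receives "/path/to/data".      */
    }

    zmq_close(xpub);
    zmq_ctx_term(ctx);
    return 0;
}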
Last but not least, one need not worry about the case of no SUBs being present: the Context() instance's internal data-pump management is well aware of connection state and has no trouble with an empty list of SUBs. If you use the .setsockopt( ZMQ_CONFLATE, 1 ) option, the locally allocated queue storage for each still-"live" SUB will keep just the one most recent message.
That's cute, isn't it?

Registration process in the SIP protocol

I am new to the SIP protocol. I went through the basics and have the following doubts:
1) In the registration process, when I captured it using Wireshark, I noticed that the From and To headers are the same. When I read RFC 3261, it says that the To header indicates whose registration is to be done and the From header indicates the person responsible for the registration, and that To and From are the same unless it is a third-party registration. It is not clear to me how they can both be the same, and what a third-party registration is.
2) Does SIP have any keep-alive mechanism? In Zoiper we have the option of setting an expiry time (3600 by default), but for registration it is 70, for SUBSCRIBE it is 60 and for INVITE it is 3600. How are these values selected automatically?
3) The user agent finds registrars using configuration, DNS lookup and multicasting. In what case is multicasting preferred? Please explain the method as well.
What I did was: installed an Asterisk server, created a Zoiper account, and captured the Zoiper REGISTER message with Wireshark in loopback mode. I am attaching screenshots of the captures. Thanks in advance.
Regarding to and from fields in REGISTER:
The "from" field here is just a logical field which should not be checked. If differs from the "to" field that means that "from" registers in name of "to".
But I can't think of any scenario when this should be checked (maybe it can be used for something -app specific- in some complicated scenario). You should just follow the usual authentication process (digest auth or other) and skip this field.
Regarding point 2 (expiry time):
The settings you mention in Zoiper are just arbitrary defaults.
Low values (below 200 seconds) can be used if the client or server doesn't support NAT keep-alive (via NOTIFY or simple \r\n\r\n messages). In this case the periodic REGISTER messages keep the UDP binding alive in NAT routers.
Higher values can save some server-side processing work and CPU resources.
I usually recommend a 600-second expiry timer with 40-second NAT keep-alive messages.
For INVITE, the Expires field actually means the maximum ring time, and it is rarely used.
Regarding point 3 (finding registrars):
The SIP server (registrar server) is usually entered manually in the client configuration or set by auto-provisioning. If the server is on the same LAN, you might also be able to detect it by multicast, but this is rarely used.
Here is a good tutorial.

Camel: need a non-blocking queue - analogous to not processing events on the graphics thread

Sorry, I answered my own question - it actually IS just SEDA. I assumed when I saw 'BlockingQueue' that SEDA would block until the queue had been read... which of course is nonsense. SEDA is all I need. Question answered.
I've got a problem that's completely screwing me. I've been provided a custom endpoint by the company we connect to, but the endpoint maintains a heartbeat to a feed, and when it sends messages above a certain size they take so long to process on the route that it blocks, the heartbeat gets lost and the connection goes down.
Obviously this is analogous to processing events off the graphics thread to keep a UI responsive, but I'm unsure how I'd achieve this in Camel. Essentially I want to queue the results and have them processed on a separate thread.
from( "custom:endpoint" )
.process( MyProcesor )
.to( "some-endpoint")
As suggested, camel-seda is a simple way to perform async/multi-threaded processing. Beware that its blocking queues are in-memory only (messages are lost if the JVM is stopped, etc.). If you need guaranteed messaging support, use camel-jms instead.

Creating futures using Apple's GCD

I'm working on a library which implements the actor model on top of Grand Central Dispatch (specifically the C level API libdispatch). Basically a brief overview of my system is as such:
Communication happens between actors using messages
Multicast communication only (one actor to many actors)
Senders and receivers are decoupled from one another using a blackboard onto which messages are pushed.
Messages are sent in the default queue asynchronously using dispatch_group_async() once a message gets pushed onto the blackboard.
I'm trying to implement futures in the language right now, so I've created a new type which holds some information:
A group of its own
The value being 'returned'
However, I have a problem: dispatch_block_t is of type void (^)(void), so it doesn't return anything. So my idea for future_new() - setting up another group that executes a block returning a result, which I could store in the value member of my future_t structure - isn't going to work.
The rest of the futures implementation is very clear, except that it all depends on being able to get the value back into the future from the actor acting on the message.
It would greatly reduce the library's usefulness if I had to ask users (and myself) to be aware of when futures were going to be used by other parts of the system; it just isn't practical.
I'm wondering if anyone can think of a way around this?
I actually had Mike Ash's implementation pointed out to me, and as soon as I saw his initWithBlock: on MAFuture, I realized what I needed to do. It is very much akin to what's done there, so I'll spare you the long-winded response about how I'm doing it.
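For anyone landing here later, the core idea can be sketched in plain C with libdispatch roughly as follows (future_t, future_new and future_get are hypothetical names, not the poster's library or MAFuture, and releases/error handling are omitted):

#include <dispatch/dispatch.h>
#include <stdlib.h>

typedef struct future {
    dispatch_group_t group;   /* signalled once the value is ready */
    void            *value;   /* the value being 'returned'        */
} future_t;

/* Wrap a value-returning block in a void block: the wrapper stores the
   result into the future, so dispatch never needs a returning block.  */
static future_t *future_new(void *(^work)(void))
{
    future_t *f = calloc(1, sizeof *f);
    f->group = dispatch_group_create();
    dispatch_queue_t q =
        dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

    dispatch_group_async(f->group, q, ^{
        f->value = work();    /* capture the result inside the void block */
    });
    return f;
}

/* Block until the value is available, then hand it back. */
static void *future_get(future_t *f)
{
    dispatch_group_wait(f->group, DISPATCH_TIME_FOREVER);
    return f->value;
}

The point is simply that the value-returning callable is wrapped in a void block that writes its result into the future, while dispatch_group_wait() supplies the "value not ready yet" blocking.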
