JGroups, TCP_NIO: multiple messages sent to nowhere

Messages are being sent to ports I never specified in my configuration file. This is what I see in the log:
[10-Jan-2011 11:02:22.917 GMT] ERROR org.jgroups.protocols.TCP_NIO - failed sending message to 192.168.50.41:8851 (116 bytes): java.lang.Exception: connection to 192.168.50.41:8851 could not be established
[10-Jan-2011 11:02:22.917 GMT] WARN org.jgroups.blocks.ConnectionTableNIO - Connection is not running, discarding message

Because you have a port_range of 2, every discovery request is sent to each of the initial_hosts defined in TCPPING plus port_range additional ports, e.g.
TCPPING.initial_hosts=A[1000],B[1000]
port_range=2
will send discovery requests to A:1000-1002, B:1000-1002.
TCPPING is used at startup for initial discovery and by MERGE2 (not in your stack)...
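For reference, the same two settings in an XML stack definition would look roughly like this (host names and ports are illustrative, not taken from your log):
<TCP bind_port="8850" />
<TCPPING initial_hosts="hostA[8850],hostB[8850]" port_range="2" />
With these values, discovery requests go to hostA:8850-8852 and hostB:8850-8852, which is why you can see traffic to ports that never appear explicitly in the configuration.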

Related

How to send MQ message without RFH header in C?

How do I send an MQ message without the RFH header in C, or in other words, how do I send a non-JMS MQ message using the 'C' library interface?
Basically, is there any 'C' equivalent of
((com.ibm.mq.jms.MQQueue) queue).setTargetClient(JMSC.MQJMS_CLIENT_NONJMS_MQ);
These are the 'C' MQ calls I am making:
MQCONNX(qmgrName, &mqcno, &hConn_, &compCode, &cReason);
MQOPEN(hConn_, &od, openOptions, &hObj_, &openCode, &reason);
MQCRTMH(hConn_, &cmho, &hMsg, &createCode, &reason);
MQSETMP(hConn_, hMsg, &smpo, &prop, &pd, MQTYPE_STRING, propVal.length(), propVal, &compCode, &reason);
pmo.Version = MQPMO_VERSION_3;
pmo.OriginalMsgHandle = hMsg;
MQPUT(hConn_, hObj_, NULL, &pmo, msg._theMessage.length(), buffer, &compCode, &reason);
MQDLTMH(hConn_, &hMsg, &dmho, &compCode, &reason);
pmo.OriginalMsgHandle = hMsg; // This line is causing the RFH header
The MQ receiver gives the following output. I am using the C++ MQ interface to receive the message because that's what the existing code does, and I need to make sure that messages generated from C can be read by the C++ receiver.
2024489 - 2019-09-26 09:00:05.691154 Receiver: Received Message from MQ of size 490
2024489 - 2019-09-26 09:00:05.691163 Receiver: Received Message from MQ --> RFH ^B
std::basic_string<char, std::char_traits<char>, std::allocator<char> >::basic_string (this=0x6ce7938,
__str="RFH \002\000\000\000P\000\000\000\"\002\000\000\063\003\000\000MQSTR \000\000\000\000\270\004\000\000(\000\000\000<usr><GROUP_ID>1</GROUP_ID></usr> corrId: \"CORR_ID\"\nchannel: \"HIFI\"\nemp
Ids {\n empId {\n type: \"CALLER_NO\"\n value: \"123456"...)
The IBM MQ classes for JMS API and the XMS APIs (C++ and .NET) are the only APIs that default to sending an RFH2 header.
The setting below that you mention is specific to the JMS API (there would be something similar or the same for XMS) and tells the API that the app receiving the message is not a JMS app, so it should not send the RFH2 header:
((com.ibm.mq.jms.MQQueue) queue).setTargetClient(JMSC.MQJMS_CLIENT_NONJMS_MQ);
If you are using the C API to send messages, it will NOT have an RFH2 header, so there is no setting to turn off what is not sent.
There are 2 ways for a C program to handle JMS (aka MQRFH2) messages.
As you saw, the default behavior is for the GMO Options field to have MQGMO_PROPERTIES_AS_Q_DEF and for the queue's Property Control attribute to be set to Compatibility. Hence, when your application gets a message, it will have the MQRFH2 structure.
If you change the GMO Options field to have MQGMO_PROPERTIES_IN_HANDLE, then when your application gets a message it will receive just the message payload, and all of the message properties will be available via the message handle.
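As a rough sketch (not from the answer above) of what that second approach looks like in C, assuming the hConn_ and hObj_ handles from your snippet are already connected and opened for input:
#include <cmqc.h>   /* MQI definitions */

MQCMHO cmho = {MQCMHO_DEFAULT};
MQHMSG hMsg = MQHM_UNUSABLE_HMSG;
MQMD   md   = {MQMD_DEFAULT};
MQGMO  gmo  = {MQGMO_DEFAULT};
MQLONG compCode, reason, dataLength;
char   buffer[4096];

/* create a message handle to receive the properties into */
MQCRTMH(hConn_, &cmho, &hMsg, &compCode, &reason);

gmo.Version   = MQGMO_VERSION_4;   /* the MsgHandle field requires GMO version 4 */
gmo.MsgHandle = hMsg;
gmo.Options   = MQGMO_PROPERTIES_IN_HANDLE | MQGMO_NO_WAIT | MQGMO_FAIL_IF_QUIESCING;

/* buffer receives just the payload (no MQRFH2); properties are read from hMsg via MQINQMP */
MQGET(hConn_, hObj_, &md, &gmo, sizeof(buffer), buffer, &dataLength, &compCode, &reason);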
In the sample MQ programs included with IBM MQ, there is one called amqsbcg0.c. There are 2 builds of it: amqsbcg (bindings mode) and amqsbcgc (client mode).
It takes up to 3 parameters: QueueName, QMgrName and PropertyOptions
(1) If you run it without any property options, or with property options set to 0, then it will set the GMO Options field to have MQGMO_PROPERTIES_AS_Q_DEF. Hence, if the message on the queue is a JMS message, the program will output an MQRFH2 structure.
(2) If you run it with property options set to 1, then it will set the GMO Options field to have MQGMO_PROPERTIES_IN_HANDLE. Hence, if the message on the queue is a JMS message, the program will output the message properties followed by the message payload.
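For example, with illustrative queue and queue manager names:
amqsbcgc TEST.QUEUE QMGR1 (JMS messages are dumped with the MQRFH2 as part of the body)
amqsbcgc TEST.QUEUE QMGR1 1 (properties are listed separately, followed by the bare payload)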

ActiveMQ 5.15.4 STOMP 1.2 behavior unexpected

Below I am trying to execute STOMP commands via netcat. When I try to get a receipt for the first message, I unexpectedly receive the second message as well (a RECEIPT and a MESSAGE) when I only expected a RECEIPT to come back from the server. The commands below are executed with 4 messages on the queue. Please advise.
Sent to server
CONNECT
login: sender_receiver
passcode:sender_receiver
accept-version:1.2
^#
From the server
CONNECTED
server:ActiveMQ/5.15.4
heart-beat:0,0
session:ID:centos7-42009-1529845487133-3:5
version:1.2
Sent to the server
SUBSCRIBE
destination:/queue/queueName
activemq.prefetchSize:1
ack:client
id:12345
^#
Received from server
MESSAGE
content-length:9
expires:0
destination:/queue/queueName
ack:ID\ccentos7-42009-1529845487133-14\c1
subscription:12345
priority:4
redelivered:true
message-id:ID\ccentos7-42009-1529845487133-3\c4\c-1\c1\c1
persistent:true
timestamp:1529845914578
message 0
Sent to server
ACK
receipt:ack_id_receipt
id:ID\ccentos7-42009-1529845487133-14\c1
^#
Received from server
RECEIPT
receipt-id:ack_id_receipt
MESSAGE
content-length:9
expires:0
destination:/queue/queueName
ack:ID\ccentos7-42009-1529845487133-16\c2
subscription:12345
priority:4
message-id:ID\ccentos7-42009-1529845487133-3\c10\c-1\c1\c2
persistent:true
timestamp:1529863616319
message 1

What is a proper HTTP status code that server returns when it limits total number of connections?

I have made a simple HTTP server that listens for socket connections. The server code limits the total number of connections it can hold simultaneously.
So, I have these lines:
do {
    new_fd = accept(lfd, NULL, NULL);
    nfds += 1;
    ...
    if (nfds + 1 > ntotal) { // connection limit exceeded
        set_headers(new_fd, /* HTTP status code here */);
        /* close socket after error had been sent */
    }
} while (1);
In this situation I'm interested in the HTTP status code that the server should send before closing the socket.
From RFC 2616 (Section 10.5.4), it appears 503 is the appropriate HTTP status code to send for an overloaded server (a sketch of sending it follows the quoted text below):
10.5.4 503 Service Unavailable
The server is currently unable to handle the request due to a
temporary overloading or maintenance of the server. The implication is
that this is a temporary condition which will be alleviated after some
delay. If known, the length of the delay MAY be indicated in a
Retry-After header. If no Retry-After is given, the client SHOULD
handle the response as it would for a 500 response.
Note: The existence of the 503 status code does not imply that a
server must use it when becoming overloaded. Some servers may wish
to simply refuse the connection.
(bold emphasis mine)
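As a minimal sketch (assuming the set_headers() placeholder from the question is replaced by a direct write, and an arbitrary Retry-After of 30 seconds), the rejection path could look like this:
#include <sys/socket.h>
#include <unistd.h>

/* Write a bare 503 response on the accepted socket, then close it. */
static void reject_overloaded(int fd)
{
    static const char resp[] =
        "HTTP/1.1 503 Service Unavailable\r\n"
        "Retry-After: 30\r\n"
        "Content-Length: 0\r\n"
        "Connection: close\r\n"
        "\r\n";
    send(fd, resp, sizeof(resp) - 1, 0); /* best effort; the client is being turned away anyway */
    close(fd);
}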

Camel HL7 - ClosedChannelException while sending ACK back to the client

I'm building an HL7 listener using netty4 and processing HL7 messages. Once a message is successfully processed, an ACK is sent back.
from("hl7NettyListener")
.routeId("route_hl7listener")
.startupOrder(997)
.unmarshal()
.hl7(false)
.to("direct:a");
from("direct:a")
.doTry()
.to("bean:processHL7?method=process")
.doCatch(HL7Exception.class)
.to("direct:ErrorACK")
//.transform(ack())
.stop()
.end()
.transform(ack())
.wireTap("direct:b");
This works fine in my local Eclipse. I fire an HL7 message and I get an ACK back.
But when I package this application into a jar, put it on my server, and then try doing a
cat example.hl7 | netcat localhost 4444 (to fire an HL7 message to port 4444 on a Linux env)
I don't get an ACK back. I get a ClosedChannelException.
DEBUG NettyConsumer - Channel: [id: 0xdf13b06b, L:0.0.0.0/0.0.0.0:4444] writing body: MSH|^~\&|Karisma||Kestral|Kestral|20180309144109.827+1300||ACK^R01|701||2.3.1
2018-03-09 14:41:09,838 [ad #3 - WireTap] DEBUG WireTapProcessor - >>>> (wiretap) direct:b Exchange[]
2018-03-09 14:41:09,839 [ServerTCPWorker] DEBUG NettyConsumer - Caused by: [org.apache.camel.CamelExchangeException - Cannot write response to null. Exchange[ID-annan06-56620-1520559639101-0-2]. Caused by: [java.nio.channels.ClosedChannelException - null]]
org.apache.camel.CamelExchangeException: Cannot write response to null. Exchange[ID-annan06-56620-1520559639101-0-2]. Caused by: [java.nio.channels.ClosedChannelException - null]
at org.apache.camel.component.netty4.handlers.ServerResponseFutureListener.operationComplete(ServerResponseFutureListener.java:54)
at org.apache.camel.component.netty4.handlers.ServerResponseFutureListener.operationComplete(ServerResponseFutureListener.java:36)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:514)
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:488)
at io.netty.util.concurrent.DefaultPromise.access$000(DefaultPromise.java:34)
at io.netty.util.concurrent.DefaultPromise$1.run(DefaultPromise.java:438)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:418)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:440)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:873)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.nio.channels.ClosedChannelException
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(...)(Unknown Source)
That worked. It was failing because netcat was closing the connection immediately. I put a "netcat -i 5 localhost" in place so that netcat waits for 5 seconds, and successfully received the ACK back.
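Combined with the test command used earlier, that looks something like:
cat example.hl7 | netcat -i 5 localhost 4444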

Camel with RabbitMQ exception only occurs on second message - mis-spelt exchange name

I'm using Camel within a Spring boot application and integrate with RabbitMQ but am encountering strange behaviour.
My app has RESTful endpoints which convert the HTTP request to a RabbitMQ message and publish it to a predefined exchange. There is a separate consumer app which listens to a queue and processes the messages.
I have deliberately entered an incorrect RabbitMQ exchange name (invalidxchangename) to check that the application will fail if the exchange does not exist. However, the Camel context starts without error, and when I send in a first request it does not report any error; the message gets lost as there is no matching RabbitMQ exchange. When I submit a second request I receive the following exception, which I would have expected on route startup:
com.rabbitmq.client.AlreadyClosedException: channel is already closed due to channel error; protocol method: #method<channel.close>(reply-code=404, reply-text=NOT_FOUND - no exchange 'invalidxchangename' in vhost
EDIT:
I've tried a more simple example to show the issue in Camel.
I've created a simple route as follows:
from("file:in?fileName=in.txt").log(LoggingLevel.DEBUG, "in here!").to("rabbitmq://localhost:5762/invalidexchange?declare=false");
where there is an existing RabbitMQ exchange called validexchange (so I have deliberately made a typo in the RabbitMQ URI). I would expect the Camel route to fail at startup since the exchange doesn't exist, or at least the first time it tries to process a new in.txt file.
What I am actually seeing in the logs is that on startup it reports no error, and only on the 2nd invocation of the route does it report an error.
2015-03-11 16:17:04.356 INFO 9756 : ID-SBMELW7W-06220-59960-1426051020468-0-2 >>> (route2) from(file://in?fileName=in.txt) --> log[in here!] <<< Pattern:InOnly, Headers:...
2015-03-11 16:17:04.360 INFO 9756 : ID-SBMELW7W-06220-59960-1426051020468-0-2 >>> (route2) log[in here!] --> rabbitmq://localhost:5762/customerchannel.exchang?declare=false <<< Pattern:InOnly, Headers:...
2015-03-11 16:17:45.073 INFO 9756 : ID-SBMELW7W-06220-59960-1426051020468-0-4 >>> (route2) from(file://in?fileName=in.txt) --> log[in here!] <<< Pattern:InOnly, Headers: ...
2015-03-11 16:17:45.079 INFO 9756 : ID-SBMELW7W-06220-59960-1426051020468-0-4 >>> (route2) log[in here!] --> rabbitmq://localhost:5762/customerchannel.exchang?declare=false <<< Pattern:InOnly, Headers:...
2015-03-11 16:17:45.092 ERROR 9756 : Failed delivery for (MessageId: ID-SBMELW7W-06220-59960-1426051020468-0-3 on ExchangeId: ID-SBMELW7W-06220-59960-1426051020468-0-4). Exhausted after delivery attempt: 1 caught: com.rabbitmq.client.AlreadyClosedException: channel is already closed due to channel error; protocol method: #method<channel.close>(reply-code=404, reply-text=NOT_FOUND - no exchange 'customerchannel.exchang' in vhost '/', class-id=60, method-id=40)
It looks like the first request is causing an error which closes the channel and logs the reason, and when you try to use the channel the second time it returns an AlreadyClosedException with the message that caused the channel to close in the first call.
You can test this by trying to publish the second message to a different exchange name in the same channel and checking which exchange is in the error. E.g. publish the second message to invalidxchangename2 and you should still see invalidxchangename as the exchange in the error.
To fix, you should handle the publish result when you publish and re-establish the connection if there's an error.
If you want to be sure that a message was delivered to a RabbitMQ queue, then you have to use publisher confirms: https://www.rabbitmq.com/confirms.html
Being able to publish a message doesn't mean that the message will reach a queue. You could go to a mailbox and leave a letter inside, but between the time you left the letter there and the time a postman picked it up, many things could have happened; for example, the mailbox could have caught fire.
