When using paho-mqtt5:test more than once with the same clientId, it throws the exception "Client is not connected", but if I use a different clientId for each to and from endpoint, it works fine.
2021-10-05 19:25:28,650 ERROR [org.apa.cam.pro.err.DefaultErrorHandler] (Camel (camel-1) thread #0 - timer://test) Failed delivery for (MessageId: 871E4623819E4FB-000000000000001B on ExchangeId: 871E4623819E4FB-000000000000001B). Exhausted after delivery attempt: 1 caught: Client is not connected (32104)
Message History (complete message history is disabled)
---------------------------------------------------------------------------------------------------------------------------------------
RouteId ProcessorId Processor Elapsed (ms)
[route1 ] [route1 ] [from[timer://test?period=1000] ] [ 0]
...
[route1 ] [to1 ] [paho:test ] [ 0]
Stacktrace
---------------------------------------------------------------------------------------------------------------------------------------
: Client is not connected (32104)
at org.eclipse.paho.mqttv5.client.internal.ExceptionHelper.createMqttException(ExceptionHelper.java:32)
at org.eclipse.paho.mqttv5.client.internal.ClientComms.sendNoWait(ClientComms.java:231)
at org.eclipse.paho.mqttv5.client.MqttAsyncClient.publish(MqttAsyncClient.java:1530)
at org.eclipse.paho.mqttv5.client.MqttClient.publish(MqttClient.java:564)
at org.apache.camel.component.paho.mqtt5.PahoMqtt5Producer.process(PahoMqtt5Producer.java:55)
at org.apache.camel.support.AsyncProcessorConverterHelper$ProcessorToAsyncProcessorBridge.process(AsyncProcessorConverterHelper.java:66)
at org.apache.camel.processor.SendProcessor.process(SendProcessor.java:172)
at org.apache.camel.processor.errorhandler.RedeliveryErrorHandler$SimpleTask.run(RedeliveryErrorHandler.java:463)
at org.apache.camel.impl.engine.DefaultReactiveExecutor$Worker.schedule(DefaultReactiveExecutor.java:179)
at org.apache.camel.impl.engine.DefaultReactiveExecutor.scheduleMain(DefaultReactiveExecutor.java:64)
at org.apache.camel.processor.Pipeline.process(Pipeline.java:184)
at org.apache.camel.impl.engine.CamelInternalProcessor.process(CamelInternalProcessor.java:398)
at org.apache.camel.component.timer.TimerConsumer.sendTimerExchange(TimerConsumer.java:210)
at org.apache.camel.component.timer.TimerConsumer$1.run(TimerConsumer.java:76)
at java.base/java.util.TimerThread.mainLoop(Timer.java:556)
at java.base/java.util.TimerThread.run(Timer.java:506)
Here is my code, which throws the exception:
@ApplicationScoped
class TestRouter : RouteBuilder() {
    override fun configure() {
        val mqtt5Component = PahoMqtt5Component()
        mqtt5Component.configuration = PahoMqtt5Configuration().apply {
            brokerUrl = "tcp://192.168.99.101:1883"
            clientId = "paho123"
            isCleanStart = true
        }
        context.addComponent("paho-mqtt5", mqtt5Component)

        from("timer:test?period=1000").setBody(constant("Testing timer2")).to("paho-mqtt5:test")

        from("paho-mqtt5:test").process { e ->
            val body = (e.`in`?.body as? ByteArray)?.let { String(it) }
            println("test body 1 => $body")
        }
    }
}
@William, this is expected behavior.
The message broker uses the client id to differentiate between clients so it can perform housekeeping for client connections that are no longer used.
In addition, a client may have a "Last Will and Testament" that the broker keeps track of.
It is acceptable to append a random number to the end of your current clientId, since it is likely no one but you will care about it.
If you have access to the individual's login you could use that as well, but you would still want to make each session unique in case they run multiple sessions.
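For example, here is a minimal Java sketch of the route from the question with a unique clientId per endpoint. This assumes the clientId endpoint option of the camel-paho-mqtt5 component and that the component is configured with a broker URL as in the question; the random suffixes are purely illustrative.
import java.util.UUID;
import org.apache.camel.builder.RouteBuilder;

public class UniqueClientIdRoute extends RouteBuilder {
    @Override
    public void configure() {
        // One MQTT connection per clientId: the producer ("to") and the
        // consumer ("from") must not both connect as "paho123".
        String pubId = "paho123-pub-" + UUID.randomUUID();
        String subId = "paho123-sub-" + UUID.randomUUID();

        from("timer:test?period=1000")
            .setBody(constant("Testing timer2"))
            .to("paho-mqtt5:test?clientId=" + pubId);

        from("paho-mqtt5:test?clientId=" + subId)
            .log("test body 1 => ${body}");
    }
}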
Maybe I don't understand what your problem is
Each client must have a unique Id
What are you observing that makes you think that it is creating multiple connections for a single client?
Is there a chance you are opening multiple windows and each is generating a different clientId?
My paho-mqtt client (JavaScript) connects as "webclient" and I append a random number (webclient173) to identify this client.
To troubleshoot, I would suggest you close all connections on the client and monitor the log of the MQTT process; watching what the server is seeing is a good way to diagnose these issues.
When the monitor is in place, open a connection from a client that currently has no connections.
Here is an example connection in my Mosquitto log file:
$ tail -f /var/log/mosquitto/mosquitto.log
1635169943: No will message specified.
1635169943: Sending CONNACK to webclient173 (0, 0)
1635169943: Received SUBSCRIBE from webclient173
1635169943: testtopic (QoS 0)
1635169943: Sending SUBACK to webclient173
1635170003: Received PINGREQ from webclient173
1635170003: Sending PINGRESP to webclient173
1635170003: Received PINGREQ from webclient173
1635170003: Sending PINGRESP to webclient173
What does your log show?
I am using Django Channels with React websockets. I am unable to open multiple simultaneous websockets if the consumer of the first websocket channel is busy with some activity. I want to open multiple simultaneous websockets where the consumer is doing something for each individual websocket.
The module versions are:
asgiref 3.6.0
daphne 3.0.2
django-cors-headers 3.13.0
djangorestframework 3.14.0
djangorestframework-simplejwt 5.2.2
From the snippet below, once one websocket completes its task the second websocket can be connected (in a separate tab), but while the first websocket is busy (sleeping), the other websockets in new browser tabs fail after the handshake with the following error:
WebSocket HANDSHAKING /ws/socket-server/monitor-rack-kpis/ushoeuhrti/ [127.0.0.1:49228]
django.channels.server INFO WebSocket HANDSHAKING /ws/socket-server/monitor-rack-kpis/ushoeuhrti/ [127.0.0.1:49228]
daphne.http_protocol DEBUG Upgraded connection ['127.0.0.1', 49228] to WebSocket
daphne.ws_protocol DEBUG WebSocket closed for ['127.0.0.1', 49228]
WebSocket DISCONNECT /ws/socket-server/monitor-rack-kpis/ushoeuhrti/ [127.0.0.1:49228]
django.channels.server INFO WebSocket DISCONNECT /ws/socket-server/monitor-rack-kpis/ushoeuhrti/ [127.0.0.1:49228]
daphne.server WARNING Application instance <Task pending name='Task-56' coro=<StaticFilesWrapper.__call__() running at /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/channels/staticfiles.py:44> wait_for=<Future pending cb=[_chain_future.<locals>._call_check_cancel() at /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/futures.py:387, Task.task_wakeup()]>> for connection <WebSocketProtocol client=['127.0.0.1', 49216] path=b'/ws/socket-server/monitor-rack-kpis/vhofoefwmr/'> took too long to shut down and was killed.
I tried using url in routing instead of re_path but that didn't help either
settings.py
ASGI_APPLICATION = 'backend.asgi.application'
CHANNEL_LAYERS = {'default': {'BACKEND': 'channels.layers.InMemoryChannelLayer'}}
asgi.py
application = ProtocolTypeRouter({
    'http': get_asgi_application(),
    'websocket': AuthMiddlewareStack(
        URLRouter(logstatus.routing.websocket_urlpatterns)
    ),
})
routing.py
websocket_urlpatterns = [
    re_path(r'^ws/socket-server/monitor-rack-kpis/(?P<username>[A-Za-z]+)/', consumers.InfluxWritePromesthusConsumer.as_asgi()),
]
consumer.py
import json
import time

from channels.generic.websocket import WebsocketConsumer

class InfluxWritePromesthusConsumer(WebsocketConsumer):
    # Note: ws_connect is a Channels 1.x-style handler that Channels 3 never
    # calls; the base WebsocketConsumer accepts the connection by default.
    def ws_connect(message):
        message.reply_channel.send({"accept": True})

    def receive(self, text_data):
        print(f"\nReceiving from: {self.channel_name}\n")
        t = 0
        # Sends a message every 10 seconds for ~100 seconds; time.sleep
        # blocks this consumer for the whole loop.
        while t <= 100:
            self.send(text_data=json.dumps({
                'type': "LearnSocket",
                'message': "Received Messages",
            }))
            t += 10
            time.sleep(10)
Frontend (React.js)
const randN = generate()
console.log(randN)

const socket = new WebSocket('ws://127.0.0.1:8000/ws/socket-server/monitor-rack-kpis/' + randN + '/');
console.log(socket)

socket.onopen = function (e) {
    console.log(socket.readyState)
    if (user && user.access) {
        socket.send("Hi");
        // Receiving response from WebSocket
        socket.onmessage = function (e) {
            //let data1 = JSON.parse(e.data);
            console.log(e.data, new Date());
            //setOutputResponse(data1['message']);
        }
    }
}
I'm taking a message from an ActiveMQ Artemis queue and trying to send it to an invalid HTTP endpoint (through a Camel route); the delivery fails and the message gets retried. Whenever message delivery fails, it is retried at the ActiveMQ Artemis level; the retry configuration is in the broker. Is there any way to get the current retry count for a JMS message in client applications?
I could maintain a retry count in a database per message, but I'm looking for a better solution or an existing mechanism.
If I maintain the count myself and set it in a message header, it gets lost during the retry. The retry happens as if it's a new message, so I can't use that.
We are using org.apache.camel.component.jms.JmsComponent with the Qpid JMS client.
You should inspect the JMSXDeliveryCount property on the message. Section 3.5.9 of the JMS 1.1 specification says this about JMSXDeliveryCount:
The number of message delivery attempts; the first is 1, the second 2,...
I just tested this with Qpid JMS 1.7.0 and ActiveMQ Artemis 2.27.1 and it worked fine in both transacted and non-transacted use-cases. In both cases the JMSRedelivered header and the JMSXDeliveryCount property were set appropriately.
Here's the code for the transacted use-case:
try (Connection connection = new JmsConnectionFactory("amqp://127.0.0.1:5672").createConnection()) {
    Session session = connection.createSession(true, Session.CLIENT_ACKNOWLEDGE);
    Queue queue = session.createQueue("myQueue");
    MessageConsumer consumer = session.createConsumer(queue);
    MessageProducer producer = session.createProducer(queue);

    TextMessage message = session.createTextMessage("test-message");
    producer.send(message);
    session.commit();
    producer.close();

    connection.start();
    message = (TextMessage) consumer.receive(1000);
    System.out.println("Redelivered? " + message.getJMSRedelivered());
    System.out.println("Delivery count: " + message.getIntProperty("JMSXDeliveryCount"));
    message.acknowledge();
    session.rollback();

    message = (TextMessage) consumer.receive(1000);
    System.out.println("Redelivered? " + message.getJMSRedelivered());
    System.out.println("Delivery count: " + message.getIntProperty("JMSXDeliveryCount"));
}
Here's the code for the non-transacted use-case:
try (Connection connection = new JmsConnectionFactory("amqp://127.0.0.1:5672").createConnection()) {
    Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
    Queue queue = session.createQueue("myQueue");
    MessageConsumer consumer = session.createConsumer(queue);
    MessageProducer producer = session.createProducer(queue);

    TextMessage message = session.createTextMessage("test-message");
    producer.send(message);
    producer.close();

    connection.start();
    message = (TextMessage) consumer.receive(1000);
    System.out.println("Redelivered? " + message.getJMSRedelivered());
    System.out.println("Delivery count: " + message.getIntProperty("JMSXDeliveryCount"));
    session.close();

    session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
    consumer = session.createConsumer(queue);
    message = (TextMessage) consumer.receive(1000);
    System.out.println("Redelivered? " + message.getJMSRedelivered());
    System.out.println("Delivery count: " + message.getIntProperty("JMSXDeliveryCount"));
}
In both cases this is what was printed:
Redelivered? false
Delivery count: 1
Redelivered? true
Delivery count: 2
The behavior you're seeing with the message being redelivered as "a new message" is expected. As noted in section 3.10 of the JMS 1.1 specification:
A consumer can modify a received message after calling either the clearBody or clearProperties method to make the body or properties writable. If the consumer modifies a received message, and the message is subsequently redelivered, the redelivered message must be the original, unmodified message (except for headers and properties modified by the JMS provider as a result of the redelivery, such as the JMSRedelivered header and the JMSXDeliveryCount property).
I'm not a Camel expert, so I can't say exactly what Camel is doing with the JMS message when delivery to the HTTP endpoint fails. All I know is that this works fine with straight JMS. I would expect it to work with Camel as well, but Camel may be doing its own form of retry rather than using the redelivery which JMS provides.
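For what it's worth, here is a sketch of how the property could be read from a Camel route. It assumes the default camel-jms behavior of mapping JMS properties to Camel message headers; the queue name and HTTP endpoint are placeholders.
import org.apache.camel.builder.RouteBuilder;

public class DeliveryCountRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("jms:queue:myQueue")
            .process(exchange -> {
                // JMSXDeliveryCount arrives as a plain message header.
                Integer deliveryCount =
                    exchange.getIn().getHeader("JMSXDeliveryCount", Integer.class);
                System.out.println("Delivery attempt: " + deliveryCount);
            })
            .to("http://invalid-endpoint.example.com");
    }
}
If the header is always 1 there, that would point to Camel performing its own redelivery instead of letting the broker redeliver.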
This would be relatively trivial to validate by turning on AMQP frame tracing from the Qpid JMS client and inspecting the frames around message delivery and the resulting disposition.
I'm testing a Pub/Sub "pull" subscriber on Cloud Run using just the listener part of this sample Java code (SubscribeAsyncExample, reworked slightly to fit in my Spring Boot app):
https://cloud.google.com/pubsub/docs/quickstart-client-libraries#java_1
It fails to start up during deploy, but while it's trying to start, it does pull items from the Pub/Sub queue. Originally, I had an HTTP "push" receiver (a @RestController) on a different Pub/Sub topic and that worked fine. Any suggestions? I'm new to Cloud Run. Thanks.
Deploying...
Creating Revision... Cloud Run error: Container failed to start. Failed to start and then listen on the port defined
by the PORT environment variable. Logs for this revision might contain more information....failed
Deployment failed
In the logs:
2020-08-11 18:43:22.688 INFO 1 --- [ main] o.s.web.context.ContextLoader : Root WebApplicationContext: initialization completed in 4606 ms
2020-08-11T18:43:25.287759Z Listening for messages on projects/ce-cxmo-dev/subscriptions/AndySubscriptionPull:
2020-08-11T18:43:25.351650801Z Container Sandbox: Unsupported syscall setsockopt(0x18,0x29,0x31,0x3eca02dfd974,0x4,0x28). It is very likely that you can safely ignore this message and that this is not the cause of any error you might be troubleshooting. Please, refer to https://gvisor.dev/c/linux/amd64/setsockopt for more information.
2020-08-11T18:43:25.351770555Z Container Sandbox: Unsupported syscall setsockopt(0x18,0x29,0x12,0x3eca02dfd97c,0x4,0x28). It is very likely that you can safely ignore this message and that this is not the cause of any error you might be troubleshooting. Please, refer to https://gvisor.dev/c/linux/amd64/setsockopt for more information.
2020-08-11 18:43:25.680 WARN 1 --- [ault-executor-0] i.g.n.s.i.n.u.internal.MacAddressUtil : Failed to find a usable hardware address from the network interfaces; using random bytes: ae:2c:fb:e7:92:9c:2b:24
2020-08-11T18:45:36.282714Z Id: 1421389098497572
2020-08-11T18:45:36.282763Z Data: We be pub-sub'n in pull mode2!!
Nothing else is logged after this, and the app stops running.
@Component
public class AndyTopicPullRecv {

    public AndyTopicPullRecv() {
        subscribeAsyncExample("ce-cxmo-dev", "AndySubscriptionPull");
    }

    public static void subscribeAsyncExample(String projectId, String subscriptionId) {
        ProjectSubscriptionName subscriptionName =
            ProjectSubscriptionName.of(projectId, subscriptionId);

        // Instantiate an asynchronous message receiver.
        MessageReceiver receiver =
            (PubsubMessage message, AckReplyConsumer consumer) -> {
                // Handle incoming message, then ack the received message.
                System.out.println("Id: " + message.getMessageId());
                System.out.println("Data: " + message.getData().toStringUtf8());
                consumer.ack();
            };

        Subscriber subscriber = null;
        try {
            subscriber = Subscriber.newBuilder(subscriptionName, receiver).build();
            // Start the subscriber.
            subscriber.startAsync().awaitRunning();
            System.out.printf("Listening for messages on %s:\n", subscriptionName.toString());
            // Allow the subscriber to run for 30s unless an unrecoverable error occurs.
            // subscriber.awaitTerminated(30, TimeUnit.SECONDS);
            subscriber.awaitTerminated();
            System.out.printf("Async subscribe terminated on %s:\n", subscriptionName.toString());
        // } catch (TimeoutException timeoutException) {
        } catch (Exception e) {
            // Shut down the subscriber after 30s. Stop receiving messages.
            subscriber.stopAsync();
            System.out.printf("Async subscriber exception: " + e);
        }
    }
}
Kolban's question is very important! With the shared code, I would say "no". The Cloud Run contract is clear:
Your service must answer HTTP requests. Outside of a request you pay nothing, and no CPU is dedicated to your instance (the instance is like a daemon when no request is being processed).
Your service must be stateless (not your case here; I won't spend time on this).
If you want to pull from your Pub/Sub subscription, create an endpoint in your code with a REST controller. While you are processing this request, run your pull mechanism and process messages.
This endpoint can be called by Cloud Scheduler regularly to keep the process up.
Be careful: there is a maximum request-processing timeout of 15 minutes (today; subject to change in the near future), so you can't run your process for more than 15 minutes. Make it resilient to failure and set your scheduler to call your service every 15 minutes, along the lines of the sketch below.
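A rough sketch of what that endpoint could look like, assuming Spring Boot and reusing the project and subscription names from the question; the /pull path and the 10-minute bound are arbitrary illustrative choices, not part of any API.
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import com.google.cloud.pubsub.v1.AckReplyConsumer;
import com.google.cloud.pubsub.v1.MessageReceiver;
import com.google.cloud.pubsub.v1.Subscriber;
import com.google.pubsub.v1.ProjectSubscriptionName;
import com.google.pubsub.v1.PubsubMessage;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class PullController {

    // Called by Cloud Scheduler; pulls for a bounded time inside the request.
    @PostMapping("/pull")
    public String pull() throws Exception {
        ProjectSubscriptionName subscription =
            ProjectSubscriptionName.of("ce-cxmo-dev", "AndySubscriptionPull");

        MessageReceiver receiver = (PubsubMessage message, AckReplyConsumer consumer) -> {
            System.out.println("Id: " + message.getMessageId());
            System.out.println("Data: " + message.getData().toStringUtf8());
            consumer.ack();
        };

        Subscriber subscriber = Subscriber.newBuilder(subscription, receiver).build();
        subscriber.startAsync().awaitRunning();
        try {
            // Stay well under the Cloud Run request timeout.
            subscriber.awaitTerminated(10, TimeUnit.MINUTES);
        } catch (TimeoutException expected) {
            // Normal case: stop pulling so the request can complete.
        } finally {
            subscriber.stopAsync().awaitTerminated();
        }
        return "done";
    }
}
Cloud Scheduler can then hit /pull on a schedule shorter than the request timeout to keep messages flowing.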
I have a scenario in which a stream-generator client generates multiple streams, merges them, and sends the result to a socket, and I want a Flink program to listen to it as the server. As we know, the server has to be up first so that it can listen to client requests. I tried to do that with the code given below:
public static void main(String[] args) throws Exception {
    // Set up the StreamExecutionEnvironment.
    StreamExecutionEnvironment environment = StreamExecutionEnvironment.getExecutionEnvironment();
    environment.setParallelism(1);

    DataStream<String> stream1 = environment.socketTextStream("localhost", 9000);
    stream1.print();

    // Start the execution.
    environment.execute(" Started the execution ");
} // main
The code for the stream generator acting as the client is given below:
DataStream<Event> stream1 = envrionment
    .addSource(new EventGenerator(2, 60, 1, 1, 100, 200))
    .name("stream 1")
    .setParallelism(parallelism_for_stream_rr);

DataStream<Event> stream2 = envrionment
    .addSource(new EventGenerator(3, 60, 1, 2, 10, 20))
    .name("stream 2")
    .setParallelism(parallelism_for_stream_rr);

DataStream<Event> stream3 = envrionment
    .addSource(new EventGenerator(5, 60, 1, 3, 30, 40))
    .name("stream 3")
    .setParallelism(parallelism_for_stream_rr);

DataStream<Event> merged = stream1.union(stream2, stream3);
merged.print();

// Sending data to Mobile CEP via socket.
merged.map(new MapFunction<Event, String>() {
    @Override
    public String map(Event event) throws Exception {
        String tuple = event.toString();
        return tuple + "\n";
    }
}).writeToSocket("localhost", 9000, new SimpleStringSchema());
Issue #1: The client code works only when I start a netcat server first, but then the netcat server doesn't forward the data streams. If the netcat server is not up, the client code says it can't make a connection.
Issue #2: The Flink program doesn't execute if the netcat server is not up:
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
I know that one possible solution for this is to generate the streams within the Flink program, but I want to receive the streams via socket.
Thanks in Advance ~
Neither Flink's socket source nor its socket sink starts a TCP server and waits for incoming connections. They are both clients which connect to an already started TCP server. That's also why you have to start netcat before launching the jobs. If you want to write to and read from a socket, then you have to write a TCP server which can buffer the incoming data and forward it once a client connects.
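Here is a minimal sketch of such a relay in plain Java. The two ports are arbitrary assumptions: the generator job would writeToSocket("localhost", 9000, ...) and the Flink job would read socketTextStream("localhost", 9001).
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Minimal relay: the generator connects to port 9000 and writes lines;
// the Flink job connects to port 9001 and reads them. Lines are buffered
// in memory while no reader is connected.
public class BufferingSocketRelay {
    public static void main(String[] args) throws Exception {
        BlockingQueue<String> buffer = new LinkedBlockingQueue<>();

        Thread writerSide = new Thread(() -> {
            try (ServerSocket server = new ServerSocket(9000);
                 Socket socket = server.accept();
                 BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()))) {
                String line;
                while ((line = in.readLine()) != null) {
                    buffer.put(line); // buffer until a reader drains it
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        });
        writerSide.start();

        try (ServerSocket server = new ServerSocket(9001);
             Socket socket = server.accept();
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {
            while (true) {
                out.println(buffer.take()); // forward buffered lines
            }
        }
    }
}
This single-shot relay accepts one writer and one reader and keeps everything in memory; a production version would need reconnect handling and bounded buffering.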
Dear Akka/Camel Masters!
I have the following route:
(netty4:tcp) -> (akka:actor)
I'm using the akka-camel module, where:
akka:actor is of type UntypedConsumerActor
netty4:tcp is an endpoint defined in the getEndpointUri method of akka:actor:
netty4:tcp://localhost:8000?textline=true
When I send bytes to the TCP socket, I receive an exception which says that the socket channel is closed:
Caused by: java.nio.channels.ClosedChannelException: null
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(...)(Unknown Source) [netty-all-4.1.4.Final.jar:4.1.4.Final]
Message History
---------------------------------------------------------------------------------------------------------------------------------------
RouteId ProcessorId Processor Elapsed (ms)
[akka://FileDaemonS] [akka://FileDaemonS] [tcp://localhost:8000 ] [ 60061]
[akka://FileDaemonS] [to1 ] [akka://FileDaemonSystem/user/FileDaemonTcpEndpoint?autoAck=false&replyTimeout=] [ 60037]
java.util.concurrent.TimeoutException: Failed to get response from the actor [ActorEndpointPath(akka://FileDaemonSystem/user/FileDaemonTcpEndpoint)] within timeout [1 minute]. Check replyTimeout and blocking settings [Endpoint[akka://FileDaemonSystem/user/FileDaemonTcpEndpoint?autoAck=false&replyTimeout=60000+milliseconds]]
at akka.camel.internal.component.ActorProducer$$anonfun$1.applyOrElse(ActorComponent.scala:151) ~[akka-camel_2.11-2.4.9.jar:na]
at akka.camel.internal.component.ActorProducer$$anonfun$1.applyOrElse(ActorComponent.scala:148) ~[akka-camel_2.11-2.4.9.jar:na]
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36) ~[scala-library-2.11.8.jar:na]
at scala.PartialFunction$AndThen.apply(PartialFunction.scala:186) [scala-library-2.11.8.jar:na]
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32) [scala-library-2.11.8.jar:na]
What am I doing wrong?
I found a solution: setting the netty endpoint as one-way solves the problem.
netty4:tcp://localhost:8000?textline=true&sync=false
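For reference, a sketch of what the consumer actor could look like with the one-way endpoint. This assumes the akka-camel 2.4.x Java API (akka.camel.javaapi.UntypedConsumerActor) and reuses the actor name from the stack trace; the message handling body is illustrative.
import akka.camel.CamelMessage;
import akka.camel.javaapi.UntypedConsumerActor;

public class FileDaemonTcpEndpoint extends UntypedConsumerActor {
    @Override
    public String getEndpointUri() {
        // sync=false makes the netty4 endpoint one-way, so Camel no longer
        // waits for a reply from the actor, and therefore no longer times
        // out and closes the socket channel.
        return "netty4:tcp://localhost:8000?textline=true&sync=false";
    }

    @Override
    public void onReceive(Object message) {
        if (message instanceof CamelMessage) {
            CamelMessage msg = (CamelMessage) message;
            System.out.println("Received: " + msg.getBodyAs(String.class, getCamelContext()));
        } else {
            unhandled(message);
        }
    }
}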