I have a scenario in which a stream generator client generates multiple streams, merges them, and sends the result to a socket, and I want a Flink program to listen to it as the server. Since the server has to be up first so that it can accept client connections, I tried to do this with the code below:
public static void main(String[] args) throws Exception {
    // set up the StreamExecutionEnvironment
    StreamExecutionEnvironment environment = StreamExecutionEnvironment.getExecutionEnvironment();
    environment.setParallelism(1);

    DataStream<String> stream1 = environment.socketTextStream("localhost", 9000);
    stream1.print();

    // start the execution
    environment.execute("Started the execution");
} // main
The code for stream generator acting as client is given below
DataStream<Event> stream1 = environment
        .addSource(new EventGenerator(2, 60, 1, 1, 100, 200))
        .name("stream 1")
        .setParallelism(parallelism_for_stream_rr);

DataStream<Event> stream2 = environment
        .addSource(new EventGenerator(3, 60, 1, 2, 10, 20))
        .name("stream 2")
        .setParallelism(parallelism_for_stream_rr);

DataStream<Event> stream3 = environment
        .addSource(new EventGenerator(5, 60, 1, 3, 30, 40))
        .name("stream 3")
        .setParallelism(parallelism_for_stream_rr);

DataStream<Event> merged = stream1.union(stream2, stream3);
merged.print();

// sending data to Mobile CEP via socket
merged.map(new MapFunction<Event, String>() {
    @Override
    public String map(Event event) throws Exception {
        String tuple = event.toString();
        return tuple + "\n";
    }
}).writeToSocket("localhost", 9000, new SimpleStringSchema());
Issue #1: The client code works only when I start a Netcat server first, but the Netcat server then doesn't forward the data streams. If the Netcat server is not up, the client code says it can't make a connection.
Issue #2: The Flink program doesn't execute if the Netcat server is not up:
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
I know that one possible solution for this is to generate the streams within the Flink program, but I want to receive the streams via socket.
Thanks in Advance ~
Neither Flink's socket source nor its socket sink starts a TCP server and waits for incoming connections. Both are clients that connect to an already running TCP server. That's also why you have to start netcat before launching the jobs. If you want to write to and read from a socket, then you have to write a TCP server which can buffer the incoming data and forward it once a client connects, for example as sketched below.
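Here is a minimal, hedged sketch of such a relay (not a Flink API, just one possible shape): it accepts the generator job's writeToSocket connection on one port, buffers the lines, and replays them to whatever client connects on a second port. The class name, the ports, and the two-port split are assumptions; the question used 9000 for both sides, so the reading job would instead point socketTextStream at the second port (9001 here).

import java.io.*;
import java.net.*;
import java.util.concurrent.*;

// Sketch only: single producer, single consumer, unbounded in-memory buffer.
public class BufferingSocketRelay {
    public static void main(String[] args) throws Exception {
        BlockingQueue<String> buffer = new LinkedBlockingQueue<>();

        // Reader thread: accept the producer (the job calling writeToSocket on port 9000).
        Thread reader = new Thread(() -> {
            try (ServerSocket in = new ServerSocket(9000);
                 Socket producer = in.accept();
                 BufferedReader r = new BufferedReader(
                         new InputStreamReader(producer.getInputStream()))) {
                String line;
                while ((line = r.readLine()) != null) {
                    buffer.put(line);          // buffer until a consumer shows up
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        });
        reader.start();

        // Writer side: accept the consumer (the job calling socketTextStream on port 9001).
        try (ServerSocket out = new ServerSocket(9001);
             Socket consumer = out.accept();
             PrintWriter w = new PrintWriter(consumer.getOutputStream(), true)) {
            while (true) {
                w.println(buffer.take());      // forward buffered lines once a client is connected
            }
        }
    }
}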
When using paho-mqtt5:test more than once with the same clientId, it throws the exception "Client is not connected", but if I use a different clientId for each to and from endpoint, it works fine.
2021-10-05 19:25:28,650 ERROR [org.apa.cam.pro.err.DefaultErrorHandler] (Camel (camel-1) thread #0 - timer://test) Failed delivery for (MessageId: 871E4623819E4FB-000000000000001B on ExchangeId: 871E4623819E4FB-000000000000001B). Exhausted after delivery attempt: 1 caught: Client is not connected (32104)
Message History (complete message history is disabled)
---------------------------------------------------------------------------------------------------------------------------------------
RouteId ProcessorId Processor Elapsed (ms)
[route1 ] [route1 ] [from[timer://test?period=1000] ] [ 0]
...
[route1 ] [to1 ] [paho:test ] [ 0]
Stacktrace
---------------------------------------------------------------------------------------------------------------------------------------
: Client is not connected (32104)
at org.eclipse.paho.mqttv5.client.internal.ExceptionHelper.createMqttException(ExceptionHelper.java:32)
at org.eclipse.paho.mqttv5.client.internal.ClientComms.sendNoWait(ClientComms.java:231)
at org.eclipse.paho.mqttv5.client.MqttAsyncClient.publish(MqttAsyncClient.java:1530)
at org.eclipse.paho.mqttv5.client.MqttClient.publish(MqttClient.java:564)
at org.apache.camel.component.paho.mqtt5.PahoMqtt5Producer.process(PahoMqtt5Producer.java:55)
at org.apache.camel.support.AsyncProcessorConverterHelper$ProcessorToAsyncProcessorBridge.process(AsyncProcessorConverterHelper.java:66)
at org.apache.camel.processor.SendProcessor.process(SendProcessor.java:172)
at org.apache.camel.processor.errorhandler.RedeliveryErrorHandler$SimpleTask.run(RedeliveryErrorHandler.java:463)
at org.apache.camel.impl.engine.DefaultReactiveExecutor$Worker.schedule(DefaultReactiveExecutor.java:179)
at org.apache.camel.impl.engine.DefaultReactiveExecutor.scheduleMain(DefaultReactiveExecutor.java:64)
at org.apache.camel.processor.Pipeline.process(Pipeline.java:184)
at org.apache.camel.impl.engine.CamelInternalProcessor.process(CamelInternalProcessor.java:398)
at org.apache.camel.component.timer.TimerConsumer.sendTimerExchange(TimerConsumer.java:210)
at org.apache.camel.component.timer.TimerConsumer$1.run(TimerConsumer.java:76)
at java.base/java.util.TimerThread.mainLoop(Timer.java:556)
at java.base/java.util.TimerThread.run(Timer.java:506)
Here is my code, which throws the exception:
@ApplicationScoped
class TestRouter : RouteBuilder() {
override fun configure() {
val mqtt5Component = PahoMqtt5Component()
mqtt5Component.configuration = PahoMqtt5Configuration().apply {
brokerUrl = "tcp://192.168.99.101:1883"
clientId = "paho123"
isCleanStart = true
}
context.addComponent("paho-mqtt5", mqtt5Component)
from("timer:test?period=1000").setBody(constant("Testing timer2")).to("paho-mqtt5:test")
from("paho-mqtt5:test").process { e ->
val body = (e.`in`?.body as? ByteArray)?.let { String(it) }
println("test body 1 => $body")
}
}
}
@William, this is expected behavior.
The message broker uses the client id to differentiate between clients so it can perform housekeeping for a client connection that is no longer used.
In addition, a client may have a "Last Will and Testament" that the broker keeps track of.
It is acceptable to append a random number to the end of your current clientId, since it is likely no one but you will care about this.
If you have access to the individual's login, you could use that as well, but you would still want to make each session unique in case they run multiple sessions; see the sketch below.
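For example, a hedged Java sketch mirroring the Kotlin configuration above (the suffix scheme is illustrative):

import java.util.UUID;
import org.apache.camel.component.paho.mqtt5.PahoMqtt5Component;
import org.apache.camel.component.paho.mqtt5.PahoMqtt5Configuration;

// Give each endpoint its own client id by appending a random suffix,
// so the broker never sees two live sessions with the same id.
PahoMqtt5Configuration config = new PahoMqtt5Configuration();
config.setBrokerUrl("tcp://192.168.99.101:1883");
config.setClientId("paho123-" + UUID.randomUUID().toString().substring(0, 8));
config.setCleanStart(true);

PahoMqtt5Component mqtt5Component = new PahoMqtt5Component();
mqtt5Component.setConfiguration(config);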
Maybe I don't understand what your problem is.
Each client must have a unique id.
What are you observing that makes you think it is creating multiple connections for a single client?
Is there a chance you are opening multiple windows and each is generating a different clientId?
A good way to diagnose issues is to monitor what the server is seeing.
My paho-mqtt client (JavaScript) connects as "webclient", and I append a random number (webclient173) to identify this client.
To troubleshoot, I would suggest you close all connections on the client and monitor the log of the MQTT process.
Once the monitor is in place, open a connection from a client that currently has no connections.
Here is an example connection, as seen in my Mosquitto log file:
$ tail -f /var/log/mosquitto/mosquitto.log
1635169943: No will message specified.
1635169943: Sending CONNACK to webclient173 (0, 0)
1635169943: Received SUBSCRIBE from webclient173
1635169943: testtopic (QoS 0)
1635169943: Sending SUBACK to webclient173
1635170003: Received PINGREQ from webclient173
1635170003: Sending PINGRESP to webclient173
1635170003: Received PINGREQ from webclient173
1635170003: Sending PINGRESP to webclient173
What does your log show?
I am trying to use this example to connect NiFi to Flink:
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

SiteToSiteClientConfig clientConfig = new SiteToSiteClient.Builder()
        .url("http://localhost:8090/nifi")
        .portName("Data for Flink")
        .requestBatchCount(5)
        .buildConfig();

SourceFunction<NiFiDataPacket> nifiSource = new NiFiSource(clientConfig);
DataStream<NiFiDataPacket> streamSource = env.addSource(nifiSource).setParallelism(2);

DataStream<String> dataStream = streamSource.map(new MapFunction<NiFiDataPacket, String>() {
    @Override
    public String map(NiFiDataPacket value) throws Exception {
        return new String(value.getContent(), Charset.defaultCharset());
    }
});

dataStream.print();
env.execute();
I am running NiFi as a standalone server with default properties, except for these properties:
nifi.remote.input.host=localhost
nifi.remote.input.secure=false
nifi.remote.input.socket.port=8090
nifi.remote.input.http.enabled=true
The call fails each time, with the following log in NiFi:
[Site-to-Site Worker Thread-24] o.a.nifi.remote.SocketRemoteSiteListener
Unable to communicate with remote instance null due to
org.apache.nifi.remote.exception.HandshakeException: Handshake
with nifi://localhost:61680 failed because the Magic Header
was not present; closing connection
NiFi version: 1.7.1, Flink version: 1.7.1
After using the nifi-toolkit, I removed the custom value of nifi.remote.input.socket.port, added transportProtocol(SiteToSiteTransportProtocol.HTTP) to my SiteToSiteClientConfig, and used http://localhost:8080/nifi as the URL.
The reason I changed the port in the first place is that, without specifying the HTTP protocol, the client uses RAW by default.
And when using the RAW protocol from the Flink side, the client cannot create a Transaction and prints the following warning:
Unable to refresh Remote Group's peers due to Remote instance of NiFi
is not configured to allow RAW Socket site-to-site communications
That's why I thought it was a port issue.
So now, with the default config of NiFi, this works as expected:
SiteToSiteClientConfig clientConfig = new SiteToSiteClient.Builder()
        .url("http://localhost:8080/nifi")
        .portName("portNameAsInNifi")
        .transportProtocol(SiteToSiteTransportProtocol.HTTP)
        .requestBatchCount(1)
        .buildConfig();
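Alternatively, a hedged sketch (not something tested here): if you wanted to keep the RAW transport instead of HTTP, you would leave nifi.remote.input.socket.port set in nifi.properties and request RAW explicitly on the client side. The port name and batch count below are just placeholders:

// assumes the raw site-to-site port is configured on the NiFi side
SiteToSiteClientConfig rawClientConfig = new SiteToSiteClient.Builder()
        .url("http://localhost:8080/nifi")
        .portName("Data for Flink")
        .transportProtocol(SiteToSiteTransportProtocol.RAW)
        .requestBatchCount(5)
        .buildConfig();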
I am trying to connect my ESP32 which runs using the ESP-IDF framework to MQTT. I have imported this MQTT library successfully and have set up the configuration to look like this:
static void mqtt_app_start(void)
{
    const esp_mqtt_client_config_t mqtt_cfg = {
        .uri = "mqtt://rxarkckf:smNb81Ppfe7T@m15.cloudmqtt.com:10793", // uri in the format username:password@domain:port
        // alternative config with host, port, username, password separated:
        // .host = "m15.cloudmqtt.com",
        // .port = 10793,
        // .username = "rxarkckf",
        // .password = "smNb81Ppfe7T",
        .event_handle = mqtt_event_handler,
        // .user_context = (void *)your_context
    };

    esp_mqtt_client_handle_t client = esp_mqtt_client_init(&mqtt_cfg);
    esp_mqtt_client_start(client);
}
I call mqtt_app_start(); in my app_main function. After uploading the code, my ESP32 doesn't connect to the MQTT broker and outputs this:
I (12633410) MQTT_CLIENT: Sending MQTT CONNECT message, type: 1, id: 0000
E (12633710) MQTT_CLIENT: Error network response
I (12633710) MQTT_CLIENT: Error MQTT Connected
I (12633710) MQTT_CLIENT: Reconnect after 10000 ms
I (12633710) MQTT_SAMPLE: MQTT_EVENT_DISCONNECTED
I have double checked that the values for the host, username, password, and port are all correct. When I look at the logs on the web interface hosted at cloudmqtt.com, I can see this output:
2018-11-17 03:50:53: New connection from 73.94.66.49 on port 10793.
2018-11-17 03:50:53: Invalid protocol "MQIs�" in CONNECT from 73.94.66.49.
2018-11-17 03:50:53: Socket error on client <unknown>, disconnecting.
2018-11-17 03:51:20: New connection from 73.94.66.49 on port 10793.
I had a similar experience using Mosquitto.
Adding this line to mqtt_config.h made my MQTT connection work:
#define CONFIG_MQTT_PROTOCOL_311
I think the more correct way to set this configuration is in sdkconfig.h, either manually or by using "make menuconfig".
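As a sketch, assuming a standard ESP-IDF project layout, the corresponding entry in the generated sdkconfig would look like this:

# enable MQTT 3.1.1 instead of the default 3.1 (set via "make menuconfig" or manually)
CONFIG_MQTT_PROTOCOL_311=y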
The problem is very simple. The library you are using implements the MQTT 3.1 protocol. The server you are trying to connect to implements the MQTT 3.1.1 protocol or higher.
As specified in the document (https://www.oasis-open.org/committees/download.php/55095/mqtt-diffs-v1.0-wd01.doc):
4.1 Protocol Name
The Protocol Name is present in the variable header of a MQTT CONNECT control packet. The Protocol Name is a UTF-8 encoded string. In MQTT 3.1 the protocol name is "MQISDP". In MQTT 3.1.1 the protocol name is represented as "MQTT".
For technical info:
https://mqtt.org/mqtt-specification/
I am trying to get an MQXAQueueConnectionFactory to work. I have created a class extending JmsComponent to handle the username and password when sending data to the queue.
It does get/put messages on the queue, but to test the XA behavior I've created a route such as:
from("wmq:queue:incomingQueue")
.process(new Processor(){
... Thread.sleep(20000)
})
.to("wmq:queue:outgoingQueue")
While it is sleeping, I shut down the queue manager. However, when I then check for uncommitted messages on the queue with DISPLAY QSTATUS('qChainQueue'), I get CURDEPTH(0), while it should be 1 as I understand the XA part.
Am I doing this totally wrong?
How can it be tested?
Helper class to handle WMQ:
public class WMQComponent extends JmsComponent {

    private final String username;
    private final String password;

    public WMQComponent(String hostname, int port, String username, String password,
                        String queueManager, String channel) throws JMSException {
        super();
        this.username = username;
        this.password = password;
        MQXAQueueConnectionFactory connectionFactory = new MQXAQueueConnectionFactory();
        connectionFactory.setTransportType(JMSC.MQJMS_TP_CLIENT_MQ_TCPIP);
        connectionFactory.setFailIfQuiesce(1);
        connectionFactory.setHostName(hostname);
        connectionFactory.setPort(port);
        connectionFactory.setQueueManager(queueManager);
        connectionFactory.setChannel(channel);
        setConnectionFactory(connectionFactory);
    }

    @Override
    public Endpoint createEndpoint(String uri) throws Exception {
        if (uri.contains("username") || uri.contains("password")) {
            throw new IllegalStateException("Username and password is set by the component");
        }
        if (uri.contains("?")) {
            return super.createEndpoint(uri + "&username=" + username + "&password=" + password);
        } else {
            return super.createEndpoint(uri + "?username=" + username + "&password=" + password);
        }
    }
}
With the following errors:
2015-03-25 14:01:12,077 [ #2 - Multicast] INFO dest_chain_ldap - org.springframework.jms.IllegalStateException: JMSWMQ0018: Failed to connect to queue manager 'QMBATCHESB' with connection mode 'Client' and host name 'hostname.com'.; nested exception is com.ibm.msg.client.jms.DetailedIllegalStateException: JMSWMQ0018: Failed to connect to queue manager 'QMBATCHESB' with connection mode 'Client' and host name 'hostname.com'. Check the queue manager is started and if running in client mode, check there is a listener running. Please see the linked exception for more information.; nested exception is com.ibm.mq.MQException: JMSCMQ0001: WebSphere MQ call failed with compcode '2' ('MQCC_FAILED') reason '2059' ('MQRC_Q_MGR_NOT_AVAILABLE').
at org.springframework.jms.support.JmsUtils.convertJmsAccessException(JmsUtils.java:279)
at org.springframework.jms.support.JmsAccessor.convertJmsAccessException(JmsAccessor.java:168)
at org.springframework.jms.core.JmsTemplate.execute(JmsTemplate.java:469)
at org.apache.camel.component.jms.JmsConfiguration$CamelJmsTemplate.send(JmsConfiguration.java:228)
at org.apache.camel.component.jms.JmsProducer.doSend(JmsProducer.java:431)
at org.apache.camel.component.jms.JmsProducer.processInOnly(JmsProducer.java:385)
at org.apache.camel.component.jms.JmsProducer.process(JmsProducer.java:153)
at org.apache.camel.processor.SendProcessor.process(SendProcessor.java:120)
at org.apache.camel.management.InstrumentationProcessor.process(InstrumentationProcessor.java:72)
at org.apache.camel.processor.interceptor.TraceInterceptor.process(TraceInterceptor.java:163)
at org.apache.camel.processor.RedeliveryErrorHandler.process(RedeliveryErrorHandler.java:416)
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:191)
at org.apache.camel.processor.Pipeline.process(Pipeline.java:118)
at org.apache.camel.processor.Pipeline.process(Pipeline.java:80)
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:191)
at org.apache.camel.component.direct.DirectProducer.process(DirectProducer.java:51)
at org.apache.camel.processor.SendProcessor.process(SendProcessor.java:120)
at org.apache.camel.management.InstrumentationProcessor.process(InstrumentationProcessor.java:72)
at org.apache.camel.processor.interceptor.TraceInterceptor.process(TraceInterceptor.java:163)
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:191)
at org.apache.camel.processor.RedeliveryErrorHandler.process(RedeliveryErrorHandler.java:416)
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:191)
at org.apache.camel.util.AsyncProcessorHelper.process(AsyncProcessorHelper.java:105)
at org.apache.camel.processor.MulticastProcessor.doProcessParallel(MulticastProcessor.java:732)
at org.apache.camel.processor.MulticastProcessor.access$200(MulticastProcessor.java:82)
at org.apache.camel.processor.MulticastProcessor$1.call(MulticastProcessor.java:303)
at org.apache.camel.processor.MulticastProcessor$1.call(MulticastProcessor.java:288)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:722)
Caused by: com.ibm.msg.client.jms.DetailedIllegalStateException: JMSWMQ0018: Failed to connect to queue manager 'QMBATCHESB' with connection mode 'Client' and host name 'hostname.com'. Check the queue manager is started and if running in client mode, check there is a listener running. Please see the linked exception for more information.
at com.ibm.msg.client.wmq.common.internal.Reason.reasonToException(Reason.java:496)
at com.ibm.msg.client.wmq.common.internal.Reason.createException(Reason.java:236)
at com.ibm.msg.client.wmq.internal.WMQConnection.<init>(WMQConnection.java:430)
at com.ibm.msg.client.wmq.internal.WMQXAConnection.<init>(WMQXAConnection.java:70)
at com.ibm.msg.client.wmq.factories.WMQXAConnectionFactory.createV7ProviderConnection(WMQXAConnectionFactory.java:190)
at com.ibm.msg.client.wmq.factories.WMQConnectionFactory.createProviderConnection(WMQConnectionFactory.java:6210)
at com.ibm.msg.client.jms.admin.JmsConnectionFactoryImpl.createConnection(JmsConnectionFactoryImpl.java:278)
at com.ibm.mq.jms.MQConnectionFactory.createCommonConnection(MQConnectionFactory.java:6155)
at com.ibm.mq.jms.MQQueueConnectionFactory.createQueueConnection(MQQueueConnectionFactory.java:144)
at com.ibm.mq.jms.MQQueueConnectionFactory.createConnection(MQQueueConnectionFactory.java:223)
at org.springframework.jms.connection.UserCredentialsConnectionFactoryAdapter.doCreateConnection(UserCredentialsConnectionFactoryAdapter.java:175)
at org.springframework.jms.connection.UserCredentialsConnectionFactoryAdapter.createConnection(UserCredentialsConnectionFactoryAdapter.java:150)
at org.springframework.jms.support.JmsAccessor.createConnection(JmsAccessor.java:184)
at org.springframework.jms.core.JmsTemplate.execute(JmsTemplate.java:456)
... 29 more
Caused by: com.ibm.mq.MQException: JMSCMQ0001: WebSphere MQ call failed with compcode '2' ('MQCC_FAILED') reason '2059' ('MQRC_Q_MGR_NOT_AVAILABLE').
at com.ibm.msg.client.wmq.common.internal.Reason.createException(Reason.java:223)
... 41 more
Caused by: com.ibm.mq.jmqi.JmqiException: CC=2;RC=2059;AMQ9204: Connection to host 'hostname.com(1514)' rejected. [1=com.ibm.mq.jmqi.JmqiException[CC=2;RC=2059;AMQ9213: A communications error for occurred. [1=java.net.ConnectException[Connection refused: connect],3=hostname.com]],3=hostname.com(1514),5=RemoteTCPConnection.connnectUsingLocalAddress]
at com.ibm.mq.jmqi.remote.internal.RemoteFAP.jmqiConnect(RemoteFAP.java:1831)
at com.ibm.msg.client.wmq.internal.WMQConnection.<init>(WMQConnection.java:345)
... 40 more
Caused by: com.ibm.mq.jmqi.JmqiException: CC=2;RC=2059;AMQ9213: A communications error for occurred. [1=java.net.ConnectException[Connection refused: connect],3=hostname.com]
at com.ibm.mq.jmqi.remote.internal.RemoteTCPConnection.connnectUsingLocalAddress(RemoteTCPConnection.java:612)
at com.ibm.mq.jmqi.remote.internal.RemoteTCPConnection.protocolConnect(RemoteTCPConnection.java:940)
at com.ibm.mq.jmqi.remote.internal.system.RemoteConnection.connect(RemoteConnection.java:1097)
at com.ibm.mq.jmqi.remote.internal.system.RemoteConnectionPool.getConnection(RemoteConnectionPool.java:348)
at com.ibm.mq.jmqi.remote.internal.RemoteFAP.jmqiConnect(RemoteFAP.java:1503)
... 41 more
Caused by: java.net.ConnectException: Connection refused: connect
at java.net.DualStackPlainSocketImpl.connect0(Native Method)
at java.net.DualStackPlainSocketImpl.socketConnect(DualStackPlainSocketImpl.java:69)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:157)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391)
at java.net.Socket.connect(Socket.java:579)
at java.net.Socket.connect(Socket.java:528)
at com.ibm.mq.jmqi.remote.internal.RemoteTCPConnection$2.run(RemoteTCPConnection.java:597)
at java.security.AccessController.doPrivileged(Native Method)
at com.ibm.mq.jmqi.remote.internal.RemoteTCPConnection.connnectUsingLocalAddress(RemoteTCPConnection.java:588)
... 45 more
The reason code 2059 and various errors stating that the connection was refused suggest either a mechanical issue (i.e. Listener not running) or an auths issue.
If I were attempting to debug this, the first thing I'd do is to enable authorization events, channel events, and any others you would normally enable. If you use MQ Explorer, also install the MS0P Plugin which will allow you to view the event messages in human-readable text.
Next, I would use the MQ sample programs to test. Since I always install the full client rather than grabbing the jar files, I have amqsputc available. However, the Java classes have IVT (initial verification test) programs. These ensure that the listener is running, the channel is configured and available, etc. As of v7.1 this also ensures that the CHLAUTH rules are set to allow the access. As of v8.0, or if you had the Capitalware exit installed, this also lets us test the user ID and password authentication.
The queue manager's error log and the event messages should provide good diagnostics, assuming the connection request makes it to MQ. Be sure to look both in the QMgr-specific error logs and the installation-global error logs.
Once I had confirmed that basic connectivity is in place, I'd reconcile my client-side configuration parameters for host, port, channel and if it is specified [shudder!] the QMgr name. Assuming these are correct and having proven basic connectivity works, it is now possible to test the app with some confidence.
The same method applies. First make sure the app's connection request makes it to the QMgr. If it does and is refused, the event messages and error logs will note this and why. If there is no indication of a failure in these places, the app isn't getting to the QMgr. The 2059 can indicate that the socket was refused, that the listener is up but the QMgr is not, that the channel instances have maxed out, or that after provisionally starting the channel it was closed by the QMgr, often due to a CHLAUTH rule. In any case, the event messages and error logs will have a detailed explanation as to why.
So I had done this a bit wrong: it was not enough to use the MQXAQueueConnectionFactory, I also had to create the JmsComponent as transacted.
I have tried stopping the queue manager while the application is running, and stopping the application while it is handling a message, and it does the rollback as expected.
I ended up with:
public static JmsComponent mqXAComponentTransacted(String hostname, int port, String username, String password,
                                                   String queueManager, String channel) throws JMSException {
    MQXAQueueConnectionFactory connectionFactory = new MQXAQueueConnectionFactory();
    connectionFactory.setTransportType(JMSC.MQJMS_TP_CLIENT_MQ_TCPIP);
    connectionFactory.setFailIfQuiesce(1);
    connectionFactory.setHostName(hostname);
    connectionFactory.setPort(port);
    connectionFactory.setQueueManager(queueManager);
    connectionFactory.setChannel(channel);

    // wrap the XA factory so the credentials are applied on every createConnection()
    UserCredentialsConnectionFactoryAdapter connectionFactoryAdapter = new UserCredentialsConnectionFactoryAdapter();
    connectionFactoryAdapter.setTargetConnectionFactory(connectionFactory);
    connectionFactoryAdapter.setUsername(username);
    connectionFactoryAdapter.setPassword(password);

    // the important part: create the component as transacted
    return JmsComponent.jmsComponentTransacted(connectionFactoryAdapter);
}
I am also using the UserCredentialsConnectionFactoryAdapter here: I didn't want to pull in Spring components, but since the Camel JMS package already depends on Spring, this was easier than my previous solution for handling credentials.
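For context, a hedged sketch of how this component might then be registered and used with the route from the question (the context setup, scheme name, and connection details are placeholders):

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

DefaultCamelContext context = new DefaultCamelContext();
// register the transacted component under the "wmq" scheme used in the route
context.addComponent("wmq", mqXAComponentTransacted(
        "hostname.com", 1514, "user", "password", "QMBATCHESB", "CHANNEL.NAME"));
context.addRoutes(new RouteBuilder() {
    @Override
    public void configure() {
        from("wmq:queue:incomingQueue")
            .to("wmq:queue:outgoingQueue");
    }
});
context.start();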
The Kafka server is configured with a path following the port number (from server.properties):
zookeeper.connect=xxxxx007:2181/kafka
The Java producer code:
Properties props = new Properties();
props.put("serializer.class", "kafka.serializer.StringEncoder");
props.put("metadata.broker.list", "xxxxx007:9092");
The producer populates the topic if the broker list omits /kafka.
The producer gets a NumberFormatException when the broker list contains /kafka:
Properties props = new Properties();
props.put("serializer.class", "kafka.serializer.StringEncoder");
props.put("metadata.broker.list", "xxxxx007:9092/kafka");
java.lang.NumberFormatException: For input string: "9092/kafka"
The Java consumer hangs (returns no data) if the ZooKeeper connection string contains /kafka:
Properties props = new Properties();
props.put("zookeeper.connect", "xxxxx007:2181/kafka");
The Java consumer gets an exception if the ZooKeeper connection string omits /kafka:
Properties props = new Properties();
props.put("zookeeper.connect", "xxxxx007:2181");
Exception in thread "main" kafka.common.ConsumerRebalanceFailedException: group1_BFTSLBHW0000RGU-1397591737558-f75b6658 can't rebalance after 4 retries
at kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener.syncedRebalance(ZookeeperConsumerConnector.scala:428)
at kafka.consumer.ZookeeperConsumerConnector.kafka$consumer$ZookeeperConsumerConnector$$reinitializeConsumer(ZookeeperConsumerConnector.scala:718)
at kafka.consumer.ZookeeperConsumerConnector.consume(ZookeeperConsumerConnector.scala:209)
at kafka.javaapi.consumer.ZookeeperConsumerConnector.createMessageStreams(ZookeeperConsumerConnector.scala:80)
at kafka.javaapi.consumer.ZookeeperConsumerConnector.createMessageStreams(ZookeeperConsumerConnector.scala:92)
at kafka.examples.TitaniumConsumer.main(TitaniumConsumer.java:73)
The intention behind specifying this ZooKeeper path is that, for a particular cluster, all of the Kafka data stored in ZooKeeper appears under that path.
Note that you must create this path in ZooKeeper yourself before starting the broker, and consumers must use the same connection string. The broker list, by contrast, takes plain host:port entries, which is why appending /kafka to it produces the NumberFormatException; a sketch of the matching configuration is below.
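A minimal sketch, assuming the old high-level producer/consumer APIs shown in the question (the group id is illustrative):

import java.util.Properties;

// producer side: broker list is host:port only, no chroot path
Properties producerProps = new Properties();
producerProps.put("serializer.class", "kafka.serializer.StringEncoder");
producerProps.put("metadata.broker.list", "xxxxx007:9092");

// consumer side: the same /kafka chroot the broker was started with
Properties consumerProps = new Properties();
consumerProps.put("zookeeper.connect", "xxxxx007:2181/kafka");
consumerProps.put("group.id", "group1");   // illustrative group id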