Apache Camel route with XA transaction, multiple brokers & control bus - apache-camel

I have 2 brokers hosted on different physical machines. I need to write a Camel route that picks up a message from an inbound queue on broker-1 and sends it to an outbound queue configured on broker-2. If broker-2 goes down, the traffic should be routed to a 3rd broker. Since this is a distributed transaction, I guess an XA transaction (Spring Boot with Atomikos) needs to be used, and for diverting traffic to the 3rd broker the Control Bus EIP should be used. But as I am new to Camel, I am not sure how to do this. Can anyone please guide me?
POM
<dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-core</artifactId>
    <version>3.7.0</version>
</dependency>
<dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-jms</artifactId>
    <version>3.7.0</version>
</dependency>
<!-- use the same version as your Camel core version
<dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-resilience4j</artifactId>
    <version>3.4.5</version>
</dependency>
-->
<dependency>
    <groupId>org.apache.camel.springboot</groupId>
    <artifactId>camel-resilience4j-starter</artifactId>
    <version>3.7.0</version>
</dependency>
<dependency>
    <groupId>org.apache.activemq</groupId>
    <artifactId>activemq-camel</artifactId>
</dependency>
<dependency>
    <groupId>org.apache.activemq</groupId>
    <artifactId>activemq-broker</artifactId>
</dependency>
<dependency>
    <groupId>org.apache.activemq</groupId>
    <artifactId>activemq-client</artifactId>
</dependency>
<dependency>
    <groupId>org.apache.activemq</groupId>
    <artifactId>activemq-pool</artifactId>
</dependency>
<!-- https://mvnrepository.com/artifact/org.springframework.boot/spring-boot-starter-jta-atomikos -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-jta-atomikos</artifactId>
</dependency>
Config
@Configuration
public class JMSConfigManager {

    @Bean(name = "activemq1")
    public ActiveMQComponent createComponent1(ConnectionFactory factory, JtaTransactionManager jtaTransactionManager) {
        ActiveMQComponent activeMQComponent = new ActiveMQComponent();
        activeMQComponent.setConnectionFactory(jmsConnectionFactory1());
        activeMQComponent.setTransactionManager(jtaTransactionManager);
        //activeMQComponent.setLazyCreateTransactionManager(false);
        activeMQComponent.setCacheLevelName("CACHE_CONSUMER");
        activeMQComponent.setTransacted(false);
        activeMQComponent.setDeliveryPersistent(true);
        //activeMQComponent.setTransactionName("PROPAGATION_REQUIRED");
        activeMQComponent.setAcknowledgementMode(JmsProperties.AcknowledgeMode.CLIENT.getMode());
        return activeMQComponent;
    }

    @Bean(name = "activemq2")
    public ActiveMQComponent createComponent2(ConnectionFactory factory, JtaTransactionManager jtaTransactionManager) {
        ActiveMQComponent activeMQComponent = new ActiveMQComponent();
        activeMQComponent.setConnectionFactory(jmsConnectionFactory2());
        //activeMQComponent.setLazyCreateTransactionManager(false);
        activeMQComponent.setCacheLevelName("CACHE_CONSUMER");
        activeMQComponent.setTransactionManager(jtaTransactionManager);
        activeMQComponent.setTransacted(false);
        activeMQComponent.setDeliveryPersistent(true);
        //activeMQComponent.setTransactionName("PROPAGATION_REQUIRED");
        activeMQComponent.setAcknowledgementMode(JmsProperties.AcknowledgeMode.CLIENT.getMode());
        return activeMQComponent;
    }

    @Primary
    @Bean(name = "cf1")
    public ConnectionFactory jmsConnectionFactory1() {
        ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory();
        connectionFactory.setBrokerURL("tcp://localhost:61616");
        return connectionFactory;
    }

    @Bean(name = "cf2")
    public ConnectionFactory jmsConnectionFactory2() {
        ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory();
        connectionFactory.setBrokerURL("tcp://localhost:61617");
        return connectionFactory;
    }
}
Policy
@Configuration
public class TransactionConfig {

    @Bean("policyPropagationRequired")
    public SpringTransactionPolicy transactionPolicyPropagationRequired(
            @Autowired JtaTransactionManager transactionManager) {
        SpringTransactionPolicy policy = new SpringTransactionPolicy(transactionManager);
        policy.setPropagationBehaviorName("PROPAGATION_REQUIRED");
        return policy;
    }
}
Route
@Override
public void configure() throws Exception {
    System.out.println("Test-1");

    from("jms:INBOUND.Q?connectionFactory=cf1")
        .transacted("policyPropagationRequired")
        //.log(LoggingLevel.INFO, log, "******Inbound messages Received")
        .to("jms:OUTBOUND.Q1?connectionFactory=cf2")
        .end();
}

As this will be a distributed transaction so I guess XA transaction(Springboot Atomikos) need to be used
It depends:
- If you want a "water-proof" end-to-end transaction between the brokers, you need to use XA transactions.
- If you are OK with a "duplicates-possible" transaction, you can simplify the setup and only use the local consumer transaction of the broker.
To clarify the last point: if you consume with a local broker transaction, Camel does not commit the message on the consumer until the route has been processed successfully. So if any error occurs, a rollback happens and the message is redelivered.
The consequence is an edge case where a message has already been sent successfully to the destination broker, but Camel is no longer able to commit against the source broker. A redelivery then occurs, the route is processed one more time, and the same message is delivered two (or more) times.
So the choice is to either use XA transactions or consumer transactions with an idempotent consumer (which compensates for the mentioned edge case).
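For illustration, a minimal sketch of the second option (local consumer transaction plus an idempotent consumer), assuming the activemq1/activemq2 components from the config above are used without the JTA transaction manager; the in-memory repository and the JMSMessageID key are just example choices, not part of the original answer:
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.support.processor.idempotent.MemoryIdempotentRepository;

public class LocalTxRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Local consumer transaction (no XA): the message is only committed on broker-1
        // after the whole route has completed successfully.
        from("activemq1:queue:INBOUND.Q?transacted=true")
            // Compensate the duplicate-delivery edge case described above by skipping
            // already-processed messages (a persistent repository would be needed to
            // survive restarts; an in-memory one is used here for brevity).
            .idempotentConsumer(header("JMSMessageID"),
                    MemoryIdempotentRepository.memoryIdempotentRepository(10000))
            .to("activemq2:queue:OUTBOUND.Q1");
    }
}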
for traffic diversion to 3rd broker, control bus eip to be used
Can't you simply use Camel's error handling to route to broker 3 instead of broker 2? A sketch of this idea follows below.
And another strategy would be to build some kind of broker cluster (real cluster, network of brokers or whatever) that encapsulates the failover from broker 2 to broker 3.
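For illustration, a rough sketch of the error-handling approach using Camel's failover load balancer; activemq3 is an assumed third ActiveMQComponent bean for the fallback broker, configured analogously to activemq1 and activemq2 above:
import org.apache.camel.builder.RouteBuilder;

public class FailoverRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("activemq1:queue:INBOUND.Q?transacted=true")
            .loadBalance()
                // at most one failover attempt, do not inherit the error handler,
                // no round robin: always try broker 2 first, then broker 3
                .failover(1, false, false)
                .to("activemq2:queue:OUTBOUND.Q1")
                .to("activemq3:queue:OUTBOUND.Q1")
            .end();
    }
}
The broker-cluster alternative would instead be handled at the connection level (for example with ActiveMQ's failover: transport URL), so the Camel route itself would not change.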

Related

NoSuchMethodError in Apache Camel Java

I ran the following code:
package com.dinesh.example4;

import javax.jms.ConnectionFactory;
//import org.apache.camel.support.HeaderFilterStrategyComponent;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.camel.CamelContext;
import org.apache.camel.Component;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.jms.JmsComponent;
import org.apache.camel.impl.DefaultCamelContext;
//import org.apache.camel.impl.*;

public class FileToActiveMQ {
    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();
        ConnectionFactory connectionFactory = new ActiveMQConnectionFactory();
        context.addComponent("jms", JmsComponent.jmsComponentAutoAcknowledge(connectionFactory));
        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() throws Exception {
                from("file:input_box?noop=true")
                    .to("activemq:queue:my_queue");
            }
        });
        while (true)
            context.start();
    }
}
to transform data from the input_box folder to ActiveMQ.
I am getting the following error:
Exception in thread "main" java.lang.NoSuchMethodError: org.apache.camel.impl.HeaderFilterStrategyComponent.<init>(Ljava/lang/Class;)V
at org.apache.camel.component.jms.JmsComponent.<init>(JmsComponent.java:71)
at org.apache.camel.component.jms.JmsComponent.<init>(JmsComponent.java:87)
at org.apache.camel.component.jms.JmsComponent.jmsComponent(JmsComponent.java:102)
at org.apache.camel.component.jms.JmsComponent.jmsComponentAutoAcknowledge(JmsComponent.java:127)
at com.dinesh.example4.FileToActiveMQ.main(FileToActiveMQ.java:18)
Line 18 in the code above is:
context.addRoutes(new RouteBuilder() {
Pom.xml:
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.dinesh</groupId>
<artifactId>camel-application1</artifactId>
<version>0.0.1-SNAPSHOT</version>
<dependencies>
<!-- https://mvnrepository.com/artifact/org.apache.camel/camel-core -->
<dependency>
<groupId>org.apache.camel</groupId>
<artifactId>camel-core</artifactId>
<version>2.14.4</version>
</dependency>
<dependency>
<groupId>org.apache.camel</groupId>
<artifactId>camel-jms</artifactId>
<version>2.24.0</version>
</dependency>
<dependency>
<groupId>org.apache.camel</groupId>
<artifactId>camel-activemq</artifactId>
<version>3.0.0</version>
</dependency>
</dependencies>
</project>
Please help.
According to your POM you mix 3 different Camel versions: 2.14.4, 2.24.0 and 3.0.0.
You have to use the same version for all Camel components, as @claus-ibsen already commented.
For example, define properties for the framework versions and use them in all the corresponding dependencies, as shown below.
However, as Sneharghya already answered, Camel 2.x has no camel-activemq; instead it uses the activemq-camel dependency provided by ActiveMQ.
Therefore the POM should look like this (the exact Camel version can vary):
<properties>
<amq.version>5.15.4</amq.version>
<camel.version>2.19.5</camel.version>
</properties>
<dependency>
<groupId>org.apache.camel</groupId>
<artifactId>camel-core</artifactId>
<version>${camel.version}</version>
</dependency>
<dependency>
<groupId>org.apache.camel</groupId>
<artifactId>camel-jms</artifactId>
<version>${camel.version}</version>
</dependency>
<dependency>
<groupId>org.apache.activemq</groupId>
<artifactId>activemq-camel</artifactId>
<version>${amq.version}</version>
</dependency>
You can also use Maven dependencyManagement.
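For example, a sketch of a dependencyManagement section that pins the versions once, using the same properties as above; modules or dependency declarations that rely on it can then omit the <version> element:
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.apache.camel</groupId>
            <artifactId>camel-core</artifactId>
            <version>${camel.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.camel</groupId>
            <artifactId>camel-jms</artifactId>
            <version>${camel.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.activemq</groupId>
            <artifactId>activemq-camel</artifactId>
            <version>${amq.version}</version>
        </dependency>
    </dependencies>
</dependencyManagement>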
camel-activemq is not available for versions older than 3.0.
If you want to keep using camel 2.24.3, then remove the camel-activemq dependency from your pom file and add
<dependency>
<groupId>org.apache.activemq</groupId>
<artifactId>activemq-camel</artifactId>
<version>5.15.13</version>
</dependency>
1. Update the class
There are two issues.
First, you registered your component under one name (jms) but sent the message to a different component (activemq). They should be the same.
Second, your while loop starts the context over and over again; the context should be started once, and the loop should only keep the JVM alive.
public class FileToActiveMQ {
    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();
        ConnectionFactory connectionFactory = new ActiveMQConnectionFactory();
        context.addComponent("activemq",
                JmsComponent.jmsComponentAutoAcknowledge(connectionFactory));
        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() throws Exception {
                from("file:input_box?noop=true")
                    .to("activemq:queue:my_queue");
            }
        });
        context.start();
        while (true) {
            Thread.sleep(10000);
        }
    }
}
2. Replace the dependencies in pom as follows:
<dependencies>
<dependency>
<groupId>org.apache.camel</groupId>
<artifactId>camel-core</artifactId>
<version>2.25.1</version>
</dependency>
<dependency>
<groupId>org.apache.camel</groupId>
<artifactId>camel-jms</artifactId>
<version>2.25.1</version>
</dependency>
<dependency>
<groupId>org.apache.activemq</groupId>
<artifactId>activemq-client</artifactId>
<version>5.15.7</version>
</dependency>
<dependency>
<groupId>org.apache.activemq</groupId>
<artifactId>activemq-camel</artifactId>
<version>5.15.7</version>
</dependency>
<dependency>
<groupId>org.apache.activemq</groupId>
<artifactId>activemq-pool</artifactId>
<version>5.15.7</version>
</dependency>
</dependencies>

Kafka Flink connection error shows NoSuchMethodError

A new error appeared when I changed from FlinkKafkaConsumer09 to FlinkKafkaConsumer.
Flink code:
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

import java.util.Properties;

@SuppressWarnings("deprecation")
public class ReadFromKafka {
    public static void main(String[] args) throws Exception {
        // create execution environment
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Properties properties = new Properties();
        properties.setProperty("bootstrap.servers", "localhost:9092");
        properties.setProperty("group.id", "test-consumer-group");

        DataStream<String> stream = env
                .addSource(new FlinkKafkaConsumer<String>("test4", new SimpleStringSchema(), properties));

        stream.map(new MapFunction<String, String>() {
            private static final long serialVersionUID = -6867736771747690202L;

            @Override
            public String map(String value) throws Exception {
                return "Stream Value: " + value;
            }
        }).print();

        env.execute();
    }
}
ERROR:
log4j:WARN No appenders could be found for logger (org.apache.flink.api.java.ClosureCleaner).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Exception in thread "main" org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
at org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:146)
at org.apache.flink.runtime.minicluster.MiniCluster.executeJobBlocking(MiniCluster.java:626)
at org.apache.flink.streaming.api.environment.LocalStreamEnvironment.execute(LocalStreamEnvironment.java:117)
at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1507)
at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1489)
at ReadFromKafka.main(ReadFromKafka.java:33)
Caused by: org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata
pom.xml
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.dataartisans</groupId>
<artifactId>kafka-example</artifactId>
<version>0.0.1-SNAPSHOT</version>
<name>kafkaex</name>
<description>this is flink kafka example</description>
<dependencies>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-java</artifactId>
<version>1.9.1</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-streaming-java_2.12</artifactId>
<version>1.9.1</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-clients_2.12</artifactId>
<version>1.9.1</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-connector-kafka_2.12</artifactId>
<version>1.9.1</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-core</artifactId>
<version>1.9.1</version>
</dependency>
<dependency>
<groupId>com.googlecode.json-simple</groupId>
<artifactId>json-simple</artifactId>
<version>1.1</version>
</dependency>
</dependencies>
</project>
flink-connector-kafka_2.12 isn't compatible with FlinkKafkaConsumer09.
flink-connector-kafka_2.12 is a "universal" kafka connector, compiled for use with Scala 2.12. This universal connector can be used with any version of Kafka from 0.11.0 onward.
FlinkKafkaConsumer09 is for use with Kafka 0.9.x. If your Kafka broker is running Kafka 0.9.x, then you will need flink-connector-kafka-0.9_2.11 or flink-connector-kafka-0.9_2.12, depending on which version of Scala you want.
On the other hand, if your Kafka broker is running a recent version of Kafka (0.11.0 or newer), then stick with flink-connector-kafka_2.12 and use FlinkKafkaConsumer instead of FlinkKafkaConsumer09.
See the documentation for more info.

Issue with Batch Table API in Flink 1.5 - complains about needing the Streaming API

I'm trying to create a batch-oriented Flink job with Flink 1.5.0 and wish to use the Table and SQL APIs to process the data. My problem is that when trying to create the BatchTableEnvironment I get a compile error:
BatchJob.java:[46,73] cannot access org.apache.flink.streaming.api.environment.StreamExecutionEnvironment
caused by the line
final BatchTableEnvironment bTableEnv = TableEnvironment.getTableEnvironment(bEnv);
As far as I know I have no dependency on the streaming environment.
My code is as the snippet below.
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.api.java.BatchTableEnvironment;
import org.apache.flink.table.sources.CsvTableSource;
import org.apache.flink.table.sources.TableSource;

import java.util.Date;

public class BatchJob {
    public static void main(String[] args) throws Exception {
        final ExecutionEnvironment bEnv = ExecutionEnvironment.getExecutionEnvironment();
        // create a TableEnvironment for batch queries
        final BatchTableEnvironment bTableEnv = TableEnvironment.getTableEnvironment(bEnv);
        // ... do stuff
        // execute program
        bEnv.execute("MY Batch Jon");
    }
}
My POM dependencies are as below:
<dependencies>
<!-- Apache Flink dependencies -->
<!-- These dependencies are provided, because they should not be packaged into the JAR file. -->
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-java</artifactId>
<version>${flink.version}</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-table_2.11</artifactId>
<version>${flink.version}</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-scala_2.11</artifactId>
<version>${flink.version}</version>
</dependency>
<!-- Add connector dependencies here. They must be in the default scope (compile). -->
<!-- Example:
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-connector-kafka-0.10_${scala.binary.version}</artifactId>
<version>${flink.version}</version>
</dependency>
-->
<!-- Add logging framework, to produce console output when running in the IDE. -->
<!-- These dependencies are excluded from the application JAR by default. -->
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
<version>1.7.7</version>
<scope>runtime</scope>
</dependency>
<dependency>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
<version>1.2.17</version>
<scope>runtime</scope>
</dependency>
</dependencies>
Please can someone help me understand what the dependency on the Streaming API is and why I need it for a batch job?
Thanks very much in advance for your help.
Oliver
Flink's Table API and SQL support are unified APIs for batch and stream processing. Many internal classes are shared between batch and stream execution and between the Scala/Java Table API and SQL, and hence link against Flink's batch and streaming dependencies.
Because of these common classes, batch queries also require the flink-streaming dependencies.
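In practice that means adding the streaming module to the POM; a minimal sketch for this project (the provided scope mirrors the other core Flink dependencies in this POM, and flink-streaming-scala_2.11 would pull it in transitively as well):
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-streaming-java_2.11</artifactId>
    <version>${flink.version}</version>
    <scope>provided</scope>
</dependency>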

Apache Flink - Class file for org.apache.flink.streaming.api.scala.DataStream not found

Using Apache Flink version 1.3.2 and Cassandra 3.11, I wrote simple code to write data into Cassandra using the Apache Flink Cassandra connector. The following is the code:
final Collection<String> collection = new ArrayList<>(50);
for (int i = 1; i <= 50; ++i) {
    collection.add("element " + i);
}

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

DataStream<Tuple2<UUID, String>> dataStream = env
        .fromCollection(collection)
        .map(new MapFunction<String, Tuple2<UUID, String>>() {
            final String mapped = " mapped ";
            String[] splitted;

            @Override
            public Tuple2<UUID, String> map(String s) throws Exception {
                splitted = s.split("\\s+");
                return new Tuple2(
                        UUID.randomUUID(),
                        splitted[0] + mapped + splitted[1]
                );
            }
        });

dataStream.print();

CassandraSink.addSink(dataStream)
        .setQuery("INSERT INTO test.phases (id, text) values (?, ?);")
        .setHost("127.0.0.1")
        .build();

env.execute();
Trying to run the same code using Apache Flink 1.4.2 (1.4.x), I got the error:
Error:(36, 22) java: cannot access org.apache.flink.streaming.api.scala.DataStream
class file for org.apache.flink.streaming.api.scala.DataStream not found
on the line
CassandraSink.addSink(dataStream)
.setQuery("INSERT INTO test.phases (id, text) values (?, ?);")
.setHost("127.0.0.1")
.build();
I think there are some dependency changes in Apache Flink 1.4.2 that cause the problem.
These are the imports I use in the code:
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.cassandra.CassandraSink;
How can I solve the error in Apache Flink version 1.4.2?
Update:
In Flink 1.3.2 the class org.apache.flink.streaming.api.scala.DataStream<T> appears in the Javadoc, but in version 1.4.2 there is no such class (see here).
I tried the code example from the Flink 1.4.2 documentation for the Cassandra connector and got the same error, yet the example worked with the Flink 1.3.2 dependencies!
Besides all the other dependencies, make sure you have the Flink Scala dependency:
Maven
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-streaming-scala_2.11</artifactId>
<version>1.4.2</version>
</dependency>
Gradle
dependencies {
    compile group: 'org.apache.flink', name: 'flink-streaming-scala_2.11', version: '1.4.2'
    ..
}
I managed to get your example working with the following dependencies:
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.cassandra.CassandraSink;
Maven
<dependencies>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-java</artifactId>
<version>1.4.2</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-streaming-java_2.11</artifactId>
<version>1.4.2</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-streaming-scala_2.11</artifactId>
<version>1.4.2</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-clients_2.11</artifactId>
<version>1.4.2</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-connector-cassandra_2.11</artifactId>
<version>1.4.2</version>
</dependency>
</dependencies>

Selenium issue: UnreachableBrowserException for driver.get().manage().getCookieNamed("JSESSIONID")

I am trying to capture the JSESSIONID in @BeforeClass:
Cookie oldCookie = driver.get().manage().getCookieNamed("JSESSIONID");
ConsoleLog.info("COOKIE VALUE = " + oldCookie.getValue());
but the script is failing with
org.openqa.selenium.remote.UnreachableBrowserException: Error communicating with the remote browser. It may have died.
I updated the Gson version and the issue was fixed:
<dependency>
<groupId>com.google.code.gson</groupId>
<artifactId>gson</artifactId>
<version>2.7</version>
</dependency>
