Kafka - Camel file read and write error

I was trying to integrate Apache Camel with Kafka and wrote a sample program that reads a file and writes it to a Kafka topic, but I get the error below while doing so. The reverse direction, reading from a Kafka topic and writing to a file, works fine for me.
Stacktrace
org.apache.kafka.common.errors.SerializationException: Can't convert value of class org.apache.camel.component.file.GenericFile to class org.apache.kafka.common.serialization.StringSerializer specified in value.serializer
[#0 - file://C:%5Cshare%5Cinput] KafkaProducer WARN No message key or partition key set
[#0 - file://C:%5Cshare%5Cinput] GenericFileOnCompletion WARN Rollback file strategy: org.apache.camel.component.file.strategy.GenericFileRenameProcessStrategy@7127845b for file: GenericFile[C:\share\input\file.txt]
[#0 - file://C:%5Cshare%5Cinput] DefaultErrorHandler ERROR Failed delivery for (MessageId: ID-L8-CWBL462-49953-1480494317350-0-21 on ExchangeId: ID-L8-CWBL462-49953-1480494317350-0-22). Exhausted after delivery attempt: 1 caught: org.apache.kafka.common.errors.SerializationException: Can't convert value of class org.apache.camel.component.file.GenericFile to class org.apache.kafka.common.serialization.StringSerializer specified in value.serializer
Code
@ContextName("myCdiCamelContext")
public class MyRoutes extends RouteBuilder {

    @Inject
    @Uri("file:C:\\share\\input?fileName=file.txt&noop=true")
    private Endpoint inputEndpoint;

    @Inject
    @Uri("kafka:localhost:9092?topic=test&groupId=testing&autoOffsetReset=earliest&consumersCount=1")
    private Endpoint resultEndpoint;

    @Override
    public void configure() throws Exception {
        from(inputEndpoint)
            .to(resultEndpoint);
    }
}

After adding a new processor that converts the body to a String and sets the Kafka key and partition headers, it worked for me:
public void configure() throws Exception {
    from(inputEndpoint).process(new Processor() {
        @Override
        public void process(Exchange exchange) throws Exception {
            // convert the GenericFile body to a String so the configured
            // StringSerializer can handle it
            exchange.getIn().setBody(exchange.getIn().getBody(), String.class);
            exchange.getIn().setHeader(KafkaConstants.PARTITION_KEY, 0);
            exchange.getIn().setHeader(KafkaConstants.KEY, "1");
        }
    })
    .to(resultEndpoint);
}
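The same conversion can also be expressed in the route itself, without a custom Processor. A minimal sketch, assuming the same injected endpoints as above:

@Override
public void configure() throws Exception {
    from(inputEndpoint)
        // let Camel's type converter unwrap the GenericFile payload
        .convertBodyTo(String.class)
        // setting a key and partition also silences the KafkaProducer warning above
        .setHeader(KafkaConstants.KEY, constant("1"))
        .setHeader(KafkaConstants.PARTITION_KEY, constant(0))
        .to(resultEndpoint);
}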

Related

Unable to run a flink application on a cluster

I have the below example Flink application which I am trying to run on a cluster.
public class ClusterConnect {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment
                .createRemoteEnvironment("X.X.X.X", 6123, "");

        // get input data
        DataSet<String> text = env.fromElements("To be, or not to be,--that is the question:--",
                "Whether 'tis nobler in the mind to suffer", "The slings and arrows of outrageous fortune",
                "Or to take arms against a sea of troubles,");

        DataSet<Tuple2<String, Integer>> counts = text
                .flatMap(new FlatMapFunction<String, Tuple2<String, Integer>>() {
                    @Override
                    public void flatMap(String s, Collector<Tuple2<String, Integer>> collector) throws Exception {
                        for (String word : s.split(" ")) {
                            collector.collect(new Tuple2<String, Integer>(word, 1));
                        }
                    }
                })
                .groupBy(0)
                .sum(1);

        // execute and print result
        counts.print();
        env.execute();
    }
}
The cluster is set up with one JobManager (a free AWS instance) and two TaskManagers (free AWS instances). While trying to run the above Flink application from a different AWS instance (which can reach the JobManager and TaskManagers), I hit the following error.
Error from application:
WARN [akka.remote.ReliableDeliverySupervisor] Association with remote system [akka.tcp://flink@172.31.29.190:6123] has failed, address is now gated for [5000] ms. Reason is: [Disassociated].
Logs from cluster job manager:
2016-11-30 22:00:42,796 WARN akka.remote.ReliableDeliverySupervisor - Association with remote system [akka.tcp://flink@172.31.6.190:33619] has failed, address is now gated for [5000] ms. Reason is: [scala.Option; local class incompatible: stream classdesc serialVersionUID = -2062608324514658839, local class serialVersionUID = -114498752079829388]

Apache Camel - Dead Letter Channel - enrich message

I'm using deadLetterChannel to take care of exceptions and send them to the error queue.
errorHandler(deadLetterChannel(QUEUE_ERROR).maximumRedeliveries(3).redeliveryDelay(2000));
Is it possible to enrich the message with additional message headers, or do I have to use onException for that?
You can use onRedelivery with a processor to add headers before each redelivery:
errorHandler(deadLetterChannel(QUEUE_ERROR)
    .maximumRedeliveries(3)
    .redeliveryDelay(2000)
    .onRedelivery(new Processor() {
        @Override
        public void process(Exchange exchange) throws Exception {
            // add headers here
        }
    }));
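For example, the placeholder above could be filled in like this (a sketch; the header names failureReason and attempt are illustrative, not from the original answer):

@Override
public void process(Exchange exchange) throws Exception {
    // the exception that triggered redelivery is kept as an exchange property
    Exception cause = exchange.getProperty(Exchange.EXCEPTION_CAUGHT, Exception.class);
    exchange.getIn().setHeader("failureReason", cause != null ? cause.getMessage() : "unknown");
    // CamelRedeliveryCounter is maintained by the error handler itself
    exchange.getIn().setHeader("attempt", exchange.getIn().getHeader(Exchange.REDELIVERY_COUNTER));
}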

Camel onException doesn't catch NoMessageIdException of idempotentConsumer?

Example route:
onException(Exception.class)
    .process(new Processor() {
        @Override
        public void process(Exchange exchange) throws Exception {
            System.out.println("it works");
        }
    })
    .handled(true);

from("jetty://http://0.0.0.0:8888/test")
    .idempotentConsumer(header("myid"), MemoryIdempotentRepository.memoryIdempotentRepository(1000000))
    .skipDuplicate(false)
    .filter(property(Exchange.DUPLICATE_MESSAGE).isEqualTo(true))
    .throwException(new DuplicateRequestException())
    .end();
Sending a request to the listener URL without the myid parameter throws org.apache.camel.processor.idempotent.NoMessageIdException: No message ID could be found using expression: header(myid) on message exchange: Exchange[Message: [Body is instance of org.apache.camel.StreamCache]] without ever passing through onException.
Yes, this is in fact a bug in Apache Camel. I have logged a ticket to get this fixed in the next releases:
https://issues.apache.org/jira/browse/CAMEL-7990
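Until a fixed release is available, one possible workaround (a sketch, not part of the original answer) is to reject requests that lack the header before the idempotent consumer runs, so an exception that onException does catch is thrown instead:

from("jetty://http://0.0.0.0:8888/test")
    // guard first: a missing myid now raises a catchable exception
    .filter(header("myid").isNull())
        .throwException(new IllegalArgumentException("missing myid header"))
    .end()
    .idempotentConsumer(header("myid"), MemoryIdempotentRepository.memoryIdempotentRepository(1000000))
    .skipDuplicate(false)
    .filter(property(Exchange.DUPLICATE_MESSAGE).isEqualTo(true))
        .throwException(new DuplicateRequestException())
    .end();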

Mocking Camel http endpoint with resource stream in camel

I have a route like the following:
from("direct:start")
    .process(new Processor() {
        @Override
        public void process(Exchange exchange) throws Exception {
            // stored as a header so the recipientList below can read it
            exchange.getIn().setHeader("doc_url", "http://myhost:5984/test/record/doc.csv");
        }
    })
    .setHeader(Exchange.HTTP_METHOD, constant("GET"))
    .convertBodyTo(String.class)
    .recipientList(header("doc_url"))
    .split(body()).streaming()
    .process(new MyProcessor());
I don't want to run Apache CouchDB every time for testing; I want this HTTP endpoint to refer to a resource file in the codebase. How do I write this?
You can use the Camel AdviceWith feature to intercept/replace endpoints for testing:
camelContext.getRouteDefinition("myRouteId")
    .adviceWith(camelContext, new AdviceWithRouteBuilder() {
        @Override
        public void configure() throws Exception {
            interceptSendToEndpoint("couchdb:http://localhost/database")
                .skipSendToOriginalEndpoint()
                .to("http://localhost:5984/test/record/doc.csv");
        }
    });
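To avoid any HTTP call at all, the intercepted endpoint can instead serve a file from the test classpath. A minimal sketch, assuming the sample file lives at src/test/resources/doc.csv (the wildcard pattern and file name are assumptions, not from the original answer):

camelContext.getRouteDefinition("myRouteId")
    .adviceWith(camelContext, new AdviceWithRouteBuilder() {
        @Override
        public void configure() throws Exception {
            // match whatever http endpoint the recipientList resolves at runtime
            interceptSendToEndpoint("http*")
                .skipSendToOriginalEndpoint()
                .process(new Processor() {
                    @Override
                    public void process(Exchange exchange) throws Exception {
                        // serve the canned CSV from the test classpath instead
                        exchange.getIn().setBody(getClass().getResourceAsStream("/doc.csv"));
                    }
                });
        }
    });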

GAE, Log lines throw nullpointer exception in unit tests

My log lines throw a NullPointerException when I run unit tests. I get no errors when I run on the local server or deploy to App Engine. Have I forgotten to include a test library somewhere?
java.lang.NullPointerException
at javax.servlet.GenericServlet.getServletContext(GenericServlet.java:160)
at javax.servlet.GenericServlet.log(GenericServlet.java:254)
at se.stuff.servlet.MyServlet.doGet(MyServlet.java:14)
at se.stuff.MyServletTest.test(MyServletTest.java:14)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
...
My servlet:
public class MyServlet extends HttpServlet {
    @Override
    public void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        // Do stuff...
        log("log stuff");
    }
}
My test:
public class MyServletTest {
    @Test
    public void test() throws IOException {
        MyServlet s = new MyServlet();
        s.doGet(null, null);
    }
}
The servlet is never initialized in that test: constructing MyServlet directly means init(ServletConfig) is never called, so when log() calls getServletContext() (the top frame of your stack trace) the missing config throws the NullPointerException; the nulls in s.doGet(null, null) would be the next problem. The servlet itself is probably fine, but the test does not make sense as written. I suggest scrapping it, first putting some real content into doGet, and then using QUnit to test your servlet from the outside.
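If you do want a unit test at this level, here is a sketch that at least gets past the NullPointerException, assuming Mockito is on the test classpath (the stubs are illustrative, not from the original answer):

public class MyServletTest {
    // static imports assumed: org.mockito.Mockito.mock, org.mockito.Mockito.when
    @Test
    public void test() throws Exception {
        // give GenericServlet a config so log() can reach a ServletContext
        ServletConfig config = mock(ServletConfig.class);
        when(config.getServletContext()).thenReturn(mock(ServletContext.class));

        MyServlet s = new MyServlet();
        s.init(config);

        s.doGet(mock(HttpServletRequest.class), mock(HttpServletResponse.class));
    }
}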
