How to upload a file to Kafka Consumer?

I am trying to load a file into a Kafka consumer, either through Flume or directly into Kafka. I started the Kafka server using this guide: http://kafka.apache.org/081/quickstart.html
As described in the doc, I started ZooKeeper and the brokers, and I can send messages from a producer to a consumer. But I would like to know whether I can upload an input file from my local machine into Kafka.
Any advice? Thanks.

You can't load a file into a Kafka consumer. You can only write data (including the contents of a file) to a Kafka topic using the Kafka Producer API.
So you need to write that file into a Kafka topic, and then your consumers will be able to read it.
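For illustration, a minimal sketch of that approach with the Java producer client, reading a local file line by line and publishing each line as a message. The broker address, topic name, and file path are placeholders, not values from the question.

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Properties;
import java.util.stream.Stream;

public class FileToKafka {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");               // placeholder broker list
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props);
             Stream<String> lines = Files.lines(Paths.get("/path/to/input.txt"))) {   // placeholder path
            // one Kafka message per line of the input file
            lines.forEach(line -> producer.send(new ProducerRecord<>("my-topic", line)));
        }   // closing the producer flushes any buffered records
    }
}

Once the file's contents are in the topic, your existing consumer will receive them like any other messages.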

Related

Stateless Beam pipeline with Flink runner - Pub/Sub Lite messages getting acked before being written to Kafka

A newcomer to both Beam and Flink, so I'm not sure whether this question is about Beam or about Flink. We are setting up to run a Beam application using the Flink runner.
I have a fairly stateless streaming application without any aggregations/state. I basically read from Pub/Sub Lite, do some simple transformation of the data, generate a ProducerRecord from it, and submit it to two separate Kafka topics. All my experiments have been successful so far, and I even got it to work locally using Minikube, the Flink K8s operator, etc.
Unfortunately, I am stuck at a point where I am unable to figure out the right docs/topics to read to understand the issue. If there is any error while saving to Kafka, or if Kafka is unavailable, the Pub/Sub Lite message seems to be acked before being successfully saved into Kafka. If I restart my app after a failure, the original Pub/Sub Lite message is not reprocessed or resent to Kafka. I am losing data in that case, since the message has already been acked in the previous step (I can also see there is no backlog in the Google Cloud console).
Ideally, my goal is that the message is only acked after we have saved it to both Kafka topics, or, if it is acked before that, that the state is saved locally so that after a restart Beam/Flink will retry just the send to Kafka.
I initially thought the way to do this was to use some form of checkpoints/savepoints, but it looks like those are more for stateful streaming applications. Am I misunderstanding the concept?
My current code is simply:
msgs.apply("Map pubsubmessage to producerrecord",
        MapElements.via(new FormatPubSubMessage(options.getTopic())))
    .setCoder(ProducerRecordCoder.of(VoidCoder.of(), ByteArrayCoder.of()))
    .apply("Write to primary kafka topic", KafkaIO.<Void, byte[]>writeRecords()
        .withBootstrapServers(options.getBootstrapServers())
        .withTopic(options.getTopic())
        .withKeySerializer(VoidSerializer.class)
        .withValueSerializer(ByteArraySerializer.class));
Any pointers to docs/concepts on how one would go about achieving it?
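Not an authoritative answer, but one direction worth reading up on: on the Flink runner, source acknowledgements and transactional commits are generally only finalized when a checkpoint completes, so checkpointing has to be enabled on the runner itself, and exactly-once delivery into Kafka additionally needs KafkaIO's EOS mode. A minimal sketch, assuming FlinkPipelineOptions and KafkaIO.WriteRecords#withEOS are the relevant pieces; the interval and group name are illustrative only.

import org.apache.beam.runners.flink.FlinkPipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

public class EnableCheckpointing {
    public static void main(String[] args) {
        FlinkPipelineOptions flinkOptions = PipelineOptionsFactory.fromArgs(args)
                .withValidation()
                .as(FlinkPipelineOptions.class);
        // Acks and Kafka transaction commits piggyback on completed checkpoints;
        // without an interval set here there is nothing to hold back and retry.
        flinkOptions.setCheckpointingInterval(60_000L);   // illustrative interval in ms

        // Build and run the pipeline from the question with these options. On the
        // writeRecords() transform above, exactly-once delivery would additionally be
        // requested roughly via .withEOS(1, "eos-sink-group") -- an assumption to
        // verify against the KafkaIO javadoc for your Beam version.
    }
}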

Flink Kafka job with Boundedness set to the latest offsets not working properly

We have two simple Flink jobs using the Kafka connector:
the first reads a file and sends its content to a Kafka topic
the second reads all previously uploaded records from Kafka and counts them
When creating the KafkaSource for consuming records, we set Boundedness to the latest offset like this:
KafkaSource<String> source = KafkaSource
    .builder()
    .setTopics(...)
    .setBootstrapServers(...)
    .setStartingOffsets(OffsetsInitializer.earliest())
    .setBounded(OffsetsInitializer.latest())
    .setDeserializer(...)
    .setGroupId(...)
    .build();
Unfortunately, it doesn't work as expected and the consumer job just hangs.
My observation is that if the producer app uses AT_LEAST_ONCE semantics then everything is fine and the consumer works as expected; the problem occurs only when the producer uses EXACTLY_ONCE semantics.
Additionally, when I run the producer job locally (from the IDE), everything works fine as well (in both EXACTLY_ONCE and AT_LEAST_ONCE modes); the problem is visible only when the producer job runs on the Ververica Platform.
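One hypothesis to check, not a confirmed fix: an EXACTLY_ONCE producer writes transactional control markers that occupy offsets in the topic, so the "latest" offset captured as the stopping point can land on a marker that is never delivered as a record, leaving the bounded consumer waiting forever. While investigating, it may also be worth pinning the consumer to committed reads, which is a plain Kafka consumer property set on the same builder as in the question:

KafkaSource<String> source = KafkaSource
    .builder()
    .setTopics(...)
    .setBootstrapServers(...)
    .setStartingOffsets(OffsetsInitializer.earliest())
    .setBounded(OffsetsInitializer.latest())
    .setDeserializer(...)
    .setGroupId(...)
    // only return records from committed transactions (the Kafka default is read_uncommitted)
    .setProperty("isolation.level", "read_committed")
    .build();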

FlinkKafkaConsumer how to stop pulling messages

I have a Flink application that reads from a single Kafka topic.
I am trying to get the FlinkKafkaConsumer to stop pulling messages.
My final goal is to build a way to redeploy my Flink application from time to time with no downtime at all, i.e. how to deploy a new job without downtime.
I have tried using "kafkaConsumer.close()", but that does not work. I am trying to stop the consumer from pulling new messages without killing the entire job; at the same time I will submit a new job, with the updated code, that reads from the same topic.
How do I do that?
Would it be possible to send a special 'switch' message on all partitions of the Kafka topic? Then you could override isEndOfStream(T nextElement) in your Kafka DeserializationSchema and have your new job instance start working after the last switch message, along the lines of the sketch below.
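For illustration, a minimal sketch of that idea; the "___SWITCH___" marker value is made up, and the schema assumes plain string records, so adapt both to your actual record format.

import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.streaming.connectors.kafka.KafkaDeserializationSchema;
import org.apache.kafka.clients.consumer.ConsumerRecord;

import java.nio.charset.StandardCharsets;

public class SwitchAwareSchema implements KafkaDeserializationSchema<String> {

    private static final String SWITCH_MARKER = "___SWITCH___";   // hypothetical marker value

    @Override
    public String deserialize(ConsumerRecord<byte[], byte[]> record) {
        return new String(record.value(), StandardCharsets.UTF_8);
    }

    @Override
    public boolean isEndOfStream(String nextElement) {
        // Returning true tells the consumer to stop fetching; as suggested above,
        // the marker should be sent to every partition so all readers finish.
        return SWITCH_MARKER.equals(nextElement);
    }

    @Override
    public TypeInformation<String> getProducedType() {
        return TypeInformation.of(String.class);
    }
}

The old job drains and finishes once it has consumed the markers, while the replacement job started against the same topic picks up from the messages that follow them.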

Apache Camel: Break task into subtasks

I need to process 1000 files after downloading them from FTP. The downloading part is done using Apache Camel; can I also break the processing of the files into subtasks using Camel, like multiple processes that Camel handles for me?
You can always use the threads() API to enable concurrent processing on a route:
from("file://downloaded").threads(10).to(...);

Questions regarding Flink streaming with Kafka

I have a Java application that launches a Flink job to process a Kafka stream.
The application blocks at job submission, at flinkEnv.execute("flink job name"), since the job runs forever on the stream coming in from Kafka.
In this case, how can I get the job id returned from the execution? I can see the job id printed in the console; I just wonder how to get it programmatically, given that flinkEnv.execute never returns.
And how can I cancel a Flink job, given its job name, from a remote server in Java?
As far as I know there is currently no nice programmatic way to control Flink. But since Flink is written in Java, everything you can do with the console can also be done through the internal class org.apache.flink.client.CliFrontend, which is what the console scripts invoke.
An alternative would be to use the REST API of the Flink JobManager.
You can also use the REST API to inspect and control Flink jobs.
See the documentation: https://ci.apache.org/projects/flink/flink-docs-master/monitoring/rest_api.html
For example, you can request http://host:port/jobs/overview to get an overview of all jobs, including each job's name and id, such as:
{"jobs":[{"jid":"d6e7b76f728d6d3715bd1b95883f8465","name":"Flink Streaming Job","state":"RUNNING","start-time":1628502261163,"end-time":-1,"duration":494208,"last-modification":1628502353963,"tasks":{"total":6,"created":0,"scheduled":0,"deploying":0,"running":6,"finished":0,"canceling":0,"canceled":0,"failed":0,"reconciling":0,"initializing":0}}]}
I really hope this helps.
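As a concrete illustration of the REST route, a sketch using the JDK HTTP client; the host, port, and job id are placeholders, and the cancel call assumes the PATCH /jobs/<jobid>?mode=cancel endpoint from the REST API docs, so verify it against your Flink version.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class FlinkRestExample {
    public static void main(String[] args) throws Exception {
        String base = "http://jobmanager-host:8081";   // placeholder JobManager address
        HttpClient client = HttpClient.newHttpClient();

        // 1) List all jobs (name, id, state); parse the JSON to find your job by name.
        HttpRequest overview = HttpRequest.newBuilder(URI.create(base + "/jobs/overview"))
                .GET().build();
        String json = client.send(overview, HttpResponse.BodyHandlers.ofString()).body();
        System.out.println(json);

        // 2) Cancel a job by its id (taken from the overview response).
        String jobId = "d6e7b76f728d6d3715bd1b95883f8465";   // example id from the answer above
        HttpRequest cancel = HttpRequest.newBuilder(URI.create(base + "/jobs/" + jobId + "?mode=cancel"))
                .method("PATCH", HttpRequest.BodyPublishers.noBody())
                .build();
        System.out.println(client.send(cancel, HttpResponse.BodyHandlers.ofString()).statusCode());
    }
}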
