I configured standalone Debezium and tested the streaming. After that I created a pipeline as follows:
pipeline.apply("Read from DebeziumIO",
    DebeziumIO.<String>read()
        .withConnectorConfiguration(
            DebeziumIO.ConnectorConfiguration.create()
                .withUsername("user")
                .withPassword("password")
                .withHostName("hostname")
                .withPort("1433")
                .withConnectorClass(SqlServerConnector.class)
                .withConnectionProperty("database.server.name", "customer")
                .withConnectionProperty("database.dbname", "test001")
                .withConnectionProperty("database.include.list", "test002")
                .withConnectionProperty("include.schema.changes", "true")
                .withConnectionProperty("database.history.kafka.bootstrap.servers", "kafka:9092")
                .withConnectionProperty("database.history.kafka.topic", "schema-changes.inventory")
                .withConnectionProperty("connect.keep.alive", "false")
                .withConnectionProperty("connect.keep.alive.interval.ms", "200"))
        .withFormatFunction(new SourceRecordJson.SourceRecordJsonMapper())
        .withCoder(StringUtf8Coder.of())
);
When I start the pipeline using the DirectRunner, the change stream is not captured by the pipeline. In my pipeline code I just added code to dump the data to the console for the time being.
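A minimal sketch of such a console-dump step, assuming the DebeziumIO read above is assigned to a PCollection<String> named records (the transform name and the plain System.out logging are illustrative):

import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.values.TypeDescriptors;

// Print every change event emitted by DebeziumIO as a JSON string.
records.apply("Dump to console",
        MapElements.into(TypeDescriptors.strings())
                .via(json -> {
                    System.out.println(json);
                    return json;
                }));

pipeline.run().waitUntilFinish();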
Also, from the log I observe that Debezium is being started and stopped frequently. Is that by design?
Also, when a change is made in the DB (INSERT/DELETE/UPDATE), I don't see it reflected in the logs.
So my questions are:
Is the configuration I provided sufficient?
Why is the pipeline not being triggered when there is a change?
What additional steps do I need to perform to get it working?
Can restarting Debezium multiple times cause a performance impact, since each restart creates a JDBC connection?
I am a newcomer to both Beam and Flink, so I am not sure whether this question is about Beam or about Flink. We are setting up to run a Beam application using the Flink runner.
I have a fairly stateless streaming application without any aggregations/state. I am basically reading from Pub/Sub Lite, doing some simple transformation of the data, generating a ProducerRecord from it, and submitting it to two separate Kafka topics. All my experiments have been successful so far, and I even got it to work locally using Minikube, the Flink Kubernetes operator, etc.
Unfortunately, I am stuck at a stage where I am unable to figure out the right docs/topics to read to understand the issue. If there is any error while writing to Kafka, or if Kafka is unavailable, it seems the Pub/Sub Lite message is acked before it has been successfully written to Kafka. If I restart my app after a failure, the original Pub/Sub Lite message is not reprocessed or resent to Kafka. I am losing data in that case, since the message has apparently already been acked in the previous step (I can also see there is no backlog in the Google Cloud console).
Ideally, my goal is that the message is only acked after we have written it to both Kafka topics, or, if it is acked earlier, that the state is saved locally so that after a restart Beam/Flink will retry just the Kafka write.
I initially thought the way to do this was to use some form of checkpoints/savepoints, but it looks like they are more for stateful streaming applications. Am I misunderstanding the concept?
My current code is simply:
msgs.apply("Map pubsubmessage to producerrecord", MapElements.via(new FormatPubSubMessage(options.getTopic())))
.setCoder(ProducerRecordCoder.of(VoidCoder.of(), ByteArrayCoder.of()))
.apply("Write to primary kafka topic", KafkaIO.<Void, byte[]>writeRecords()
.withBootstrapServers(options.getBootstrapServers())
.withTopic(options.getTopic())
.withKeySerializer(VoidSerializer.class)
.withValueSerializer(ByteArraySerializer.class)
);
Any pointers to docs/concepts on how one would go about achieving this?
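One direction to explore, not a definitive answer: as far as I understand, Beam's Flink runner does not enable Flink checkpointing by default (it is controlled through FlinkPipelineOptions), and KafkaIO also offers an opt-in exactly-once sink mode (withEOS), which requires Kafka transactions and a runner that supports it. A rough sketch with placeholder values:

import org.apache.beam.runners.flink.FlinkPipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

// Enable Flink checkpointing for the Beam pipeline (the interval is a placeholder).
FlinkPipelineOptions flinkOptions = PipelineOptionsFactory.fromArgs(args)
        .withValidation()
        .as(FlinkPipelineOptions.class);
flinkOptions.setCheckpointingInterval(60_000L);  // checkpoint every 60 seconds

// Opt-in exactly-once Kafka sink; the shard count and sink group id are illustrative.
KafkaIO.<Void, byte[]>writeRecords()
        .withBootstrapServers(options.getBootstrapServers())
        .withTopic(options.getTopic())
        .withKeySerializer(VoidSerializer.class)
        .withValueSerializer(ByteArraySerializer.class)
        .withEOS(1, "primary-kafka-sink");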
The Flink version is 1.12. I followed the steps at https://ci.apache.org/projects/flink/flink-docs-release-1.12/deployment/metric_reporters.html#prometheuspushgateway-orgapacheflinkmetricsprometheusprometheuspushgatewayreporter, filled in my config, and ran my job on the Flink cluster. But after a few hours I could no longer see metric data in Grafana, so I logged in to the server and checked the pushgateway log, where I found an "Out of memory" error.
I don't understand this: I set deleteOnShutdown=true and some of my jobs have been closed, so why does the pushgateway run out of memory?
This problem has always existed; however, it was not described in the documentation before v1.13. You can see the pull request for more info.
If you want to use a push model in your Flink cluster, I recommend using InfluxDB.
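For reference, a minimal flink-conf.yaml sketch for the InfluxDB reporter (host, port and database name are placeholders; the reporter jar has to be available on the cluster, e.g. under plugins/):

metrics.reporter.influxdb.factory.class: org.apache.flink.metrics.influxdb.InfluxdbReporterFactory
metrics.reporter.influxdb.scheme: http
metrics.reporter.influxdb.host: localhost
metrics.reporter.influxdb.port: 8086
metrics.reporter.influxdb.db: flink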
I have been trying to use OpenTelemetry (https://opentelemetry.io/) in an Apache Flink job. I am sending the traces to a Kafka topic in order to view them in Jaeger.
Tracing works when I run the job inside my IntelliJ IDE, but once I build the package and try to execute it on the cluster, I am not able to make it work.
Is there any blocker in that sense for Apache Flink that I am not aware of?
I have accomplished this using an environment variable:
export FLINK_ENV_JAVA_OPTS=-javaagent:./lib/opentelemetry-javaagent-all.jar
But this only works if I am the one setting up the Flink cluster. The problem is that the cluster I am using is inside AWS (Kinesis Analytics), and I am not able to set this variable.
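On a cluster you manage yourself, the same agent flag can presumably also be set through flink-conf.yaml instead of the environment variable (the path below is a placeholder); on Kinesis Analytics, where AWS manages the configuration, this is not an option either:

env.java.opts: "-javaagent:/opt/flink/lib/opentelemetry-javaagent-all.jar"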
Is there a way to use OpenTelemetry with Flink?
I have a Java application that launches a Flink job to process Kafka streaming data.
The application blocks at the job submission, at flinkEnv.execute("flink job name"), since the job runs forever on the stream coming from Kafka.
In this case, how can I get the job id returned from the execution? I can see the job id printed in the console; I just wonder how to get it in this case, given that flinkEnv.execute has not returned yet.
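A minimal sketch of one approach, assuming Flink 1.10 or later, where executeAsync() submits the job and returns a JobClient without blocking:

import org.apache.flink.api.common.JobID;
import org.apache.flink.core.execution.JobClient;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

StreamExecutionEnvironment flinkEnv = StreamExecutionEnvironment.getExecutionEnvironment();
// ... build the Kafka streaming topology on flinkEnv ...

// executeAsync() returns immediately, unlike execute(), so the job id is available right away.
JobClient jobClient = flinkEnv.executeAsync("flink job name");
JobID jobId = jobClient.getJobID();
System.out.println("Submitted Flink job with id " + jobId);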
How can I cancel a Flink job, given its job name, from a remote server in Java?
As far as I know, there is currently no nice programmatic way to control Flink. But since Flink is written in Java, everything you can do with the console can also be done with the internal class org.apache.flink.client.CliFrontend, which is invoked by the console scripts.
An alternative would be using the REST API of the Flink JobManager.
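For the cancel-by-name case from the question, a rough sketch against that REST API (assuming Java 11's HttpClient; the base URL is a placeholder and the JSON handling is deliberately naive, a JSON library would be cleaner):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Look up the job id by name via /jobs/overview, then cancel it with PATCH /jobs/<jobid>.
static void cancelJobByName(String baseUrl, String jobName) throws Exception {
    HttpClient client = HttpClient.newHttpClient();

    String overview = client.send(
            HttpRequest.newBuilder(URI.create(baseUrl + "/jobs/overview")).GET().build(),
            HttpResponse.BodyHandlers.ofString()).body();

    // Naively pull out the "jid" that precedes the wanted "name" field in the JSON response.
    Matcher m = Pattern
            .compile("\"jid\":\"([0-9a-f]+)\",\"name\":\"" + Pattern.quote(jobName) + "\"")
            .matcher(overview);
    if (m.find()) {
        client.send(
                HttpRequest.newBuilder(URI.create(baseUrl + "/jobs/" + m.group(1)))
                        .method("PATCH", HttpRequest.BodyPublishers.noBody())
                        .build(),
                HttpResponse.BodyHandlers.ofString());
    }
}

For example: cancelJobByName("http://jobmanager-host:8081", "Flink Streaming Job").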
You can use the REST API to query the Flink job status.
See this link: https://ci.apache.org/projects/flink/flink-docs-master/monitoring/rest_api.html.
You can request http://host:port/jobs/overview to get information about all jobs, including each job's name and id, such as:
{"jobs":[{"jid":"d6e7b76f728d6d3715bd1b95883f8465","name":"Flink Streaming Job","state":"RUNNING","start-time":1628502261163,"end-time":-1,"duration":494208,"last-modification":1628502353963,"tasks":{"total":6,"created":0,"scheduled":0,"deploying":0,"running":6,"finished":0,"canceling":0,"canceled":0,"failed":0,"reconciling":0,"initializing":0}}]}
I really hope this will help you.
Can anybody explain to me why the "Configuration" section of a running job in the Apache Flink Dashboard is empty?
How can I use this job configuration in my flow? This does not seem to be described in the documentation.
The configuration tab of a running job shows the values of the ExecutionConfig. Depending on the version of Flink, you will experience different behaviour.
Flink <= 1.0
The ExecutionConfig is only accessible for finished jobs. For running jobs, it is not possible to access it. Once the job has finished or has been stopped/cancelled, you should be able to see the ExecutionConfig.
Flink > 1.0
The ExecutionConfig can also be accessed for running jobs.
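For completeness, a small sketch of how values end up in that tab and how they can be read back inside an operator, using the documented global job parameters pattern (the key name is illustrative):

import org.apache.flink.api.java.utils.ParameterTool;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

// Registering parameters as global job parameters makes them show up in the
// job's "Configuration" tab of the dashboard.
ParameterTool params = ParameterTool.fromArgs(args);
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.getConfig().setGlobalJobParameters(params);

// Inside a rich function the same values can be read back:
// ParameterTool p = (ParameterTool) getRuntimeContext()
//         .getExecutionConfig().getGlobalJobParameters();
// String myValue = p.get("my.key", "default");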