The Flink version is 1.12. I followed the steps at https://ci.apache.org/projects/flink/flink-docs-release-1.12/deployment/metric_reporters.html#prometheuspushgateway-orgapacheflinkmetricsprometheusprometheuspushgatewayreporter, filled in my config, and ran my job on the Flink cluster. But after a few hours I could no longer see metric data in Grafana, so I logged in to the server, checked the Pushgateway log, and found an "Out of memory" error.
I don't understand: I set deleteOnShutdown=true and some of my jobs have finished, so why does the Pushgateway run out of memory?
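My reporter config is essentially the one from the linked page (host, port and jobName below are placeholders, not my real values):

metrics.reporter.promgateway.class: org.apache.flink.metrics.prometheus.PrometheusPushGatewayReporter
metrics.reporter.promgateway.host: pushgateway-host
metrics.reporter.promgateway.port: 9091
metrics.reporter.promgateway.jobName: myJob
metrics.reporter.promgateway.randomJobNameSuffix: true
metrics.reporter.promgateway.deleteOnShutdown: true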
This problem has always existed; it just was not described in the documentation before v1.13. You can look at the corresponding pull request for more information.
If you want to use a push model in your Flink cluster, I recommend using InfluxDB instead.
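A minimal sketch of the InfluxDB reporter setup in flink-conf.yaml; host, database and credentials are placeholders, and the exact keys should be checked against the metric reporters page for your Flink version:

metrics.reporter.influxdb.factory.class: org.apache.flink.metrics.influxdb.InfluxdbReporterFactory
metrics.reporter.influxdb.scheme: http
metrics.reporter.influxdb.host: influxdb-host
metrics.reporter.influxdb.port: 8086
metrics.reporter.influxdb.db: flink
metrics.reporter.influxdb.username: flink-metrics
metrics.reporter.influxdb.password: secret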
I'm building a process to handle millions of records with Apache Flink to support logistics data pipelines. I'm moving from Kinesis sources/sinks to Kafka sources/sinks.
However, in the Flink dashboard the job metrics are not updated in near real time. Do you know what could be wrong with the job/version?
By the way, once the job is closed it does show all the metrics, just not in near real time.
(Screenshot: job metrics not updating in the dashboard)
Fixed after cleaning up conflicting dependencies on the kafka-clients library.
In my case I was also using some Avro and CloudEvents libraries that pulled in a higher kafka-clients version. Excluding kafka-clients from those libraries so that Flink's kafka-clients version wins solved the issue.
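The exclusion looks roughly like this in Maven; the coordinates of the offending library are placeholders for whichever Avro/CloudEvents dependency drags in the newer kafka-clients:

<dependency>
    <!-- placeholder coordinates: the library that pulls in the newer kafka-clients -->
    <groupId>com.example</groupId>
    <artifactId>some-avro-or-cloudevents-lib</artifactId>
    <version>x.y.z</version>
    <exclusions>
        <exclusion>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka-clients</artifactId>
        </exclusion>
    </exclusions>
</dependency>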
I configured standalone Debezium and tested the streaming. After that, I created a pipeline as follows:
pipeline.apply("Read from DebeziumIO",
DebeziumIO.<String>read()
.withConnectorConfiguration(
DebeziumIO.ConnectorConfiguration.create()
.withUsername("user")
.withPassword("password")
.withHostName("hostname")
.withPort("1433")
.withConnectorClass(SqlServerConnector.class)
.withConnectionProperty("database.server.name", "customer")
.withConnectionProperty("database.dbname", "test001")
.withConnectionProperty("database.include.list", "test002")
.withConnectionProperty("include.schema.changes", "true")
.withConnectionProperty("database.history.kafka.bootstrap.servers", "kafka:9092")
.withConnectionProperty("database.history.kafka.topic", "schema-changes.inventory")
.withConnectionProperty("connect.keep.alive", "false")
.withConnectionProperty("connect.keep.alive.interval.ms", "200")
).withFormatFunction(new SourceRecordJson.SourceRecordJsonMapper()).withCoder(StringUtf8Coder.of())
)
When I start the pipeline using DirectRunner, the data stream is not captured by the pipeline. In my pipeline code I have only added a step to dump the data to the console for the time being.
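That console dump step is roughly the following sketch (my real code differs slightly; it is just a MapElements that passes each JSON string through System.out):

// Continuing right after the DebeziumIO read shown above
// (needs org.apache.beam.sdk.transforms.MapElements and org.apache.beam.sdk.values.TypeDescriptors):
    .apply("Print to console", MapElements
        .into(TypeDescriptors.strings())
        .via(json -> {
            System.out.println(json);   // dump each change record (JSON) to the console
            return json;
        }));

pipeline.run().waitUntilFinish();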
Also, from the logs I observe that Debezium is being started and stopped frequently. Is that by design?
And when a change is made in the DB (INSERT/DELETE/UPDATE), I don't see it reflected in the logs.
So my questions are:
Is the configuration I provided sufficient?
Why is the pipeline not triggered when there is a change?
What additional steps do I need to perform to get it working?
Can restarting Debezium multiple times cause a performance impact, since it creates a JDBC connection each time?
I have a Java application that launches a Flink job to process Kafka streams.
The application blocks at job submission, at flinkEnv.execute("flink job name"), because the job runs forever on the streams coming in from Kafka.
In this case, how can I get the job id of the submitted job? I can see the job id printed in the console; I just wonder how to obtain it when flinkEnv.execute has not returned yet.
Also, how can I cancel a Flink job, given its job name, from a remote server in Java?
As far as I know there is currently no nice programmatic way to control Flink. But since Flink is written in Java, everything you can do from the console scripts can also be done with the internal class org.apache.flink.client.CliFrontend, which is what those scripts invoke.
An alternative would be to use the REST API of the Flink JobManager.
You can use the REST API to query and control your Flink jobs.
See https://ci.apache.org/projects/flink/flink-docs-master/monitoring/rest_api.html.
For example, you can request http://host:port/jobs/overview to get information about all jobs, including each job's name and id, such as:
{"jobs":[{"jid":"d6e7b76f728d6d3715bd1b95883f8465","name":"Flink Streaming Job","state":"RUNNING","start-time":1628502261163,"end-time":-1,"duration":494208,"last-modification":1628502353963,"tasks":{"total":6,"created":0,"scheduled":0,"deploying":0,"running":6,"finished":0,"canceling":0,"canceled":0,"failed":0,"reconciling":0,"initializing":0}}]}
I really hope this will help you.
Can anybody explain why the "Configuration" section of a running job in the Apache Flink Dashboard is empty?
And how can I use this job configuration in my flow? This does not seem to be described in the documentation.
The Configuration tab of a running job shows the values of the ExecutionConfig. Depending on the version of Flink, you will experience different behaviour.
Flink <= 1.0
The ExecutionConfig is only accessible for finished jobs. For running jobs, it is not possible to access it. Once the job has finished or has been stopped/cancelled, you should be able to see the ExecutionConfig.
Flink > 1.0
The ExecutionConfig can also be accessed for running jobs.
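As for using the configuration in your own flow: the user-defined part of the ExecutionConfig is the global job parameters, which you can set on the environment and read back inside operators. A minimal sketch, with the parameter name and values made up for illustration:

import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.api.java.utils.ParameterTool;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class GlobalJobParametersExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Whatever is registered here becomes part of the ExecutionConfig
        // and is shown with the job's configuration in the dashboard.
        ParameterTool params = ParameterTool.fromArgs(args); // e.g. --greeting Hello
        env.getConfig().setGlobalJobParameters(params);

        env.fromElements("a", "b", "c")
           .map(new RichMapFunction<String, String>() {
               @Override
               public String map(String value) {
                   // Read the global job parameters back inside an operator.
                   ParameterTool p = (ParameterTool) getRuntimeContext()
                           .getExecutionConfig().getGlobalJobParameters();
                   return p.get("greeting", "hi") + " " + value;
               }
           })
           .print();

        env.execute("global-job-parameters-example");
    }
}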
I have Solr 3.6 powering search on a WordPress site I maintain, and this morning I saw that Solr could not execute a data import. I was attempting to run http://example.com:9393/solr/wordpress/dataimport?command=full-import. Whereas until today the import would chug happily along, now I am getting only the message "Indexing failed. Rolled back all changes."
I'm probably missing something obvious, but where does Solr keep the data import logs? I would like to check them out to see what the problem is, but I have not been able to find the right logs.
Solr does not have a separate log file for data import; log statements related to the data import process are written to the standard log file that Solr writes to. If you are using Tomcat, that should be ../logs/catalina.out.
The error could be caused by any number of problems between Solr, the data source, or perhaps the data itself. You might want to check the following questions as well:
Indexing failed. Rolled back all changes. (Solr DataImport)
solr dataimport error: Indexing failed. Rolled back all changes