I am new to Flink and have installed YARN and Flink on my MacBook with an M1 Pro chip.
To monitor Flink 1.13, I installed Grafana, Prometheus, and Pushgateway following posts I found online, and all the web UIs look fine.
Then I changed the flink-conf.yaml file as in the attached screenshot and copied flink-metrics-prometheus-1.13.6.jar to the lib folder, then restarted Flink with stop-cluster.sh and start-cluster.sh.
However, the Pushgateway still gets no metrics from Flink.
Can anyone tell me how to fix this?
I'm really in a hurry. Many thanks!
I solved this problem. It's quite tricky: you should use 127.0.0.1 instead of localhost:
metrics.reporter.promgateway.host: 127.0.0.1
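For completeness, here is roughly what the full reporter section in flink-conf.yaml looks like for the PushGateway reporter (9091 is the Pushgateway's default port; the jobName and interval below are example values):
metrics.reporter.promgateway.class: org.apache.flink.metrics.prometheus.PrometheusPushGatewayReporter
metrics.reporter.promgateway.host: 127.0.0.1
metrics.reporter.promgateway.port: 9091
metrics.reporter.promgateway.jobName: flink-metrics
metrics.reporter.promgateway.randomJobNameSuffix: true
metrics.reporter.promgateway.deleteOnShutdown: false
metrics.reporter.promgateway.interval: 15 SECONDS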
We've got several Flink applications reading from Kafka topics, and they work fine. But recently we added a new topic to an existing Flink job and it started failing immediately on startup with the following root error:
Caused by: org.apache.kafka.common.KafkaException: java.lang.NoClassDefFoundError: net/jpountz/lz4/LZ4Exception
at org.apache.kafka.common.record.CompressionType$4.wrapForInput(CompressionType.java:113)
at org.apache.kafka.common.record.DefaultRecordBatch.compressedIterator(DefaultRecordBatch.java:256)
at org.apache.kafka.common.record.DefaultRecordBatch.streamingIterator(DefaultRecordBatch.java:334)
at org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.nextFetchedRecord(Fetcher.java:1208)
at org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.fetchRecords(Fetcher.java:1245)
... 7 more
I found out that this topic uses lz4 compression and guessed that Flink is for some reason unable to work with it. Adding the lz4 dependency directly to the app didn't help, and what's weird is that it runs fine locally but fails on the remote cluster.
The Flink runtime version is 1.9.1, and we use that same version for all the Flink dependencies in our application:
flink-streaming-java_2.11, flink-connector-kafka_2.11, flink-java and flink-clients_2.11
Could this be happening because Flink doesn't include a dependency on the lz4 library?
Found the solution. No version upgrade was needed, nor any additional dependencies in the application itself. What worked for us was adding the lz4 library jar directly to the Flink lib folder in the Docker image. After that, the lz4 compression error disappeared.
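For context, the image change was essentially one line. A rough sketch of the Dockerfile addition (the base image tag matches our Flink 1.9.1/Scala 2.11 setup, the lz4-java version is just an example, and the jar is downloaded beforehand from Maven Central, coordinates org.lz4:lz4-java):
FROM flink:1.9.1-scala_2.11
# Kafka's lz4 codec needs this jar on Flink's classpath
COPY lz4-java-1.6.0.jar /opt/flink/lib/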
My IBM Cloud Private 2.1.0.2 CE install is failing at "Waiting for Cloudant to Start". It is a multi-node cluster running CentOS 7.2 with Docker 17.09.
I have one combined boot/master node, one worker node, and one proxy node. I have checked the hardware requirements and assigned more than 151 GB of storage on all nodes. I have also pulled the icp-datastore image locally.
Can anyone please suggest how to solve this issue?
You should edit the Cloudant deployment and increase the readiness timeout so the platform can take its time rebuilding the database. This can be done with kubectl (kubectl edit deploy xxxx) or through the ICP console.
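For illustration, the edited section of the deployment spec would look something like this (the path, port, and timing values here are examples rather than ICP defaults; the idea is simply to raise the timeouts until Cloudant has enough time to come up):
readinessProbe:
  httpGet:
    path: /
    port: 5984              # example: the usual CouchDB/Cloudant port
  initialDelaySeconds: 60   # give the database time to rebuild
  timeoutSeconds: 30
  failureThreshold: 10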
I would recommend looking at the following (example commands are sketched after the list):
Is the firewall enabled? Always ensure it is in the correct state before installing, even after an uninstall.
Is IPsec enabled? Try installing with IPsec disabled.
Ensure SELinux is disabled every time you run the install.
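For example, on CentOS 7 the usual checks and toggles look like this (these are the standard RHEL/CentOS commands, run as root; setenforce 0 only lasts until reboot, so check /etc/selinux/config for a permanent change):
systemctl status firewalld    # check whether the firewall is running
systemctl stop firewalld      # stop it for the duration of the install if needed
getenforce                    # show the current SELinux mode
setenforce 0                  # switch SELinux to permissive for this boot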
I am unable to run a Flink (1.0.3) process in Zeppelin. It stays pending and the web UI does not record the process, in both cluster and local mode. Flink itself works fine from the command line and in IntelliJ. I built Zeppelin with mvn clean package.
Has anyone had a similar issue? Do I need to amend zeppelin-env.sh to fix Flink? I am also unable to kill the process in the Zeppelin web UI and had to use ./bin/zeppelin-daemon.sh restart.
I am using Flink 1.2, but I had the same problem.
I did two things and it worked for me.
First of all, update your Flink version. Then, in the interpreter settings, change the value host = local to your localhost IP address.
Second, kill all the Flink processes running in the terminal and just use the Zeppelin web UI.
You can check that everything is working by writing:
%flink
senv
res0: org.apache.flink.streaming.api.scala.StreamExecutionEnvironment = org.apache.flink.streaming.api.scala.StreamExecutionEnvironment@48388d9f
Let me know how it goes.
Regards! :)
I am using eZ Publish (4.6.0). I have set up the Solr folder inside my XAMPP folder and activated the eZ Find extension in \settings\override\site.ini.append.php.
Solr is running on port 8080 ("http://127.0.0.1:8080/solr/"); when I open that URL, it loads fine.
However, when I try to run the command php extension/ezfind/bin/php/updatesearchindexsolr.php -s
it shows the following error: "Please, ensure the server is started and the configuration of eZ Find is correct". I am following http://harmssite.com/post/86#comment-113.
Can anyone suggest what I may be doing wrong, or any other solution?
If you are sure that Solr is running, then you might need to edit solr.ini (or one of its overrides) and use 127.0.0.1 instead of localhost. I've run into this issue a few times.
The default Solr port is 8983, so eZ Find out of the box is set up to look at that port. If you are sure that Solr is up and running on port 8080, then look in your solr.ini to verify that eZ Find is pointed at the right Solr port.
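For example, an override such as settings/override/solr.ini.append.php would contain something like this (SearchServerURI under [SolrBase] is, if I remember correctly, the setting eZ Find reads for the Solr location):
[SolrBase]
SearchServerURI=http://127.0.0.1:8080/solr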
I am using Ubuntu and I have some HTML pages. I want to host a website from my PC at home. How can I do this using Apache2? I am new to Apache2; if anyone knows how to do this, please let me know.
The easiest way to publish HTML files with Apache is to put them in /home/your-user-name-please-do-replace-me/public_html, make sure Apache is installed, and then start it. For how to make Apache start again after a reboot, see this forum post on Ubuntu Forums.
When you have Apache up and running, find out your server's IP address (http://whatismyipaddress.com/ is pretty handy for this), and your files will then be accessible at: http://your-ip-address-whatever/~your-user-name-please-do-replace-me
You could always use a service like http://www.dyndns.com/ so that you don't have to use your raw IP address all the time.
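If it helps, the whole setup on Ubuntu boils down to roughly these commands (package and module names are the standard Ubuntu ones; userdir is the Apache module that exposes ~/public_html):
sudo apt-get install apache2    # install Apache
sudo a2enmod userdir            # enable serving ~/public_html
sudo service apache2 restart    # restart so the module takes effect
mkdir -p ~/public_html          # put your HTML files in here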
Once Apache is installed you should find that you can place content in a directory similar to /usr/www or /usr/share/www and Apache will serve it. You may also need to start Apache; I don't know the Ubuntu command, but on Fedora 12 it is:
service httpd start
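On Ubuntu, where the service is called apache2, the equivalent should be:
sudo service apache2 start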