flink - zeppelin - not responding - apache-flink

I am unable to run a Flink (1.0.3) process in Zeppelin. It stays pending and the web UI does not record the process, both in cluster and in local mode. Flink itself works fine from the command line and in IntelliJ. I built Zeppelin with mvn clean package.
Has anyone had a similar issue? Do I need to amend zeppelin-env.sh to get Flink working? I am also unable to kill the process from the Zeppelin web UI and had to use ./bin/zeppelin-daemon.sh restart.

I am using Flink 1.2 but I had the same problem.
I did two things and it worked for me.
First, update your Flink version. Then, in the Interpreter settings, change the value host = local to your machine's IP address (an example is shown further down).
Second, kill all the Flink processes from the terminal and just use the Zeppelin web UI.
You can check that everything is working by running:
%flink
senv
res0: org.apache.flink.streaming.api.scala.StreamExecutionEnvironment = org.apache.flink.streaming.api.scala.StreamExecutionEnvironment@48388d9f
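For reference, the change from the first step is just the host property in Zeppelin's Flink interpreter settings, roughly like this (the IP address is only an example, and 6123 is simply Flink's default JobManager RPC port, which may already be set for you):
host    192.168.0.10    (your machine's IP instead of the default "local")
port    6123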
Let me know how it is going.
Regards! :)

Related

flink standalone get no metrics to pushgateway for grafana

I am new to Flink and have installed YARN and Flink on my MacBook with an M1 Pro chip.
To monitor Flink 1.13, I installed Grafana, Prometheus, and Pushgateway the same way as described in posts I found on the internet, and all the web UIs look fine.
Then I updated the flink-conf.yaml file accordingly and copied flink-metrics-prometheus-1.13.6.jar to the lib folder, and restarted Flink using stop-cluster.sh and start-cluster.sh.
However, the Pushgateway still gets no metrics from Flink.
Can anyone tell me how to fix this?
Really in a hurry. Many thanks!
I solved this problem. It is quite tricky: you should use 127.0.0.1 instead of localhost:
metrics.reporter.promgateway.host: 127.0.0.1
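For context, a minimal Pushgateway reporter section in flink-conf.yaml looks roughly like this (the host line is the fix above; the remaining keys and values are illustrative and should be adapted to your setup, 9091 being the Pushgateway's default port):
metrics.reporter.promgateway.class: org.apache.flink.metrics.prometheus.PrometheusPushGatewayReporter
metrics.reporter.promgateway.host: 127.0.0.1
metrics.reporter.promgateway.port: 9091
metrics.reporter.promgateway.jobName: flink-metrics
metrics.reporter.promgateway.randomJobNameSuffix: true
metrics.reporter.promgateway.deleteOnShutdown: false
metrics.reporter.promgateway.interval: 15 SECONDS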

Deploying Haskell yesod docker container on google app engine

I am trying to upload a yesod Docker container on Google App Engine. The source code is here and the Docker image is here.
I followed the documentation in the Custom runtime quickstart, and when invoking gcloud app deploy the app builds fine after increasing the build timeout, but the container either fails the readiness check when trying to start or shows the following timeout message:
ERROR: (gcloud.app.deploy) Operation [apps/meeshkan-github-webhook-router/operations/xxxx-xxxx-xxxx] timed out. This operation may still be underway.
I have tried experimenting with several things, including a manual readiness check, creating an /_ah/health endpoint, and increasing the timeout of the readiness check all the way to 1799 seconds, but none of these actions seem to work.
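For reference, the readiness check settings I experimented with in app.yaml looked roughly like this (the values are illustrative, apart from the 1799-second start timeout mentioned above):
readiness_check:
  path: "/_ah/health"
  check_interval_sec: 5
  timeout_sec: 4
  failure_threshold: 2
  success_threshold: 2
  app_start_timeout_sec: 1799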
One issue may be the size of the container (it is 3.2gb), and I could try to prune it down, but I'd only do that if someone could confirm that container size is a contributing factor to deployment problems. Other than that, I'm not sure what could be causing this failure. The docker image starts fine on our local machines.
Thanks in advance for your help and suggestions!
The issue turned out to be that I was building on Windows: images built with Docker Desktop on Windows gave all shell scripts the executable permission automatically, whereas Docker on Linux needs shell scripts to be given the executable permission explicitly. By adding this line to my Dockerfile:
RUN chmod +x /usr/src/app/run.sh
everything worked fine!
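For context, the relevant part of the Dockerfile then looks roughly like this (the COPY and CMD lines are only illustrative of a typical layout; the chmod line is the actual fix):
# copy the application into the image (illustrative)
COPY . /usr/src/app
# make sure the start script is executable even when the image is built from Windows
RUN chmod +x /usr/src/app/run.sh
# start the app via the script (illustrative entry point)
CMD ["/usr/src/app/run.sh"]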

How to see console.logs of a running nodeJs application on ubuntu 18 EC2 instance?

I am new to the Node world. I created a Node.js REST API. When I run npm start on my local machine, I can see the console.log() output in my terminal. Now I am running the same application on an AWS EC2 instance with Ubuntu as the OS. I run npm start over ssh, serve my app on port 80, and then close the ssh connection. When I reconnect via ssh, I want to see those console.log() messages in my terminal again.
I completely understand that logging messages to the terminal is not ideal and that there are many alternatives. I just want to know how to get back to the same terminal output that I see when I start my application.
If you are using pm2, you can try "pm2 logs".
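For example, a typical pm2 workflow looks roughly like this (app.js and my-api are placeholders for your entry point and process name; adjust them to your project):
npm install -g pm2
pm2 start app.js --name my-api    # run the app in the background under pm2
pm2 logs my-api                   # stream its console.log output, even after reconnecting over ssh
pm2 startup                       # optional: prints a command to start pm2 on boot
pm2 save                          # optional: save the current process list so it is restored on reboot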
Nodemon won't work well on a production server or in any situation where the app needs to keep running by itself.
Nodemon is a dev tool for restarting your server during development. On a real VPS you need to place the process in the background, or it will be killed when the connection times out.
Check out this YouTube series for a proper deployment architecture using pm2 and NGINX on a Red Hat server; I've personally used it more than once:
https://www.youtube.com/playlist?list=PLQlWzK5tU-gDyxC1JTpyC2avvJlt3hrIh

An error occurred during installation: No such plugin: cloudbees-folder

An error occurred during installation: No such plugin: cloudbees-folder (I get this error while installing the Jenkins suggested plugins on Windows 10.)
This worked for me:
http://localhost:8080/restart
OR
service jenkins restart on my Ubuntu
For Windows users, restarting the Jenkins service will resolve the issue.
For that, open Task Manager (Ctrl+Shift+Esc) -> Services -> right-click on the Jenkins service and choose Restart.
If you are using the Jenkins image in Docker and you see this error, just open the URL below:
http://localhost:8080/safeRestart
You will get a prompt asking whether you want to restart; don't click Yes, just ignore it and start creating your jobs by clicking New. It's just a plugin.
For the Docker version:
1. Open the browser.
2. Go to your_ip_address:your_docker_port/restart
Note: you must have the admin password generated by your Jenkins server.
As a comment suggested: always look for the official images listed at jenkins.io/download.
I faced the same issue when using the Docker image jenkins:2.60.3; it turned out that this image isn't an official one. The official images look like jenkins/jenkins:<tag>, and you can find them here: https://hub.docker.com/r/jenkins/jenkins
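For example, pulling and running the official image might look roughly like this (the port mappings and volume name are the usual defaults; adjust them to your setup):
docker run -d --name jenkins -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home jenkins/jenkins:lts
# the initial admin password for the setup wizard is printed in the container logs
docker logs jenkins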
If you are behind a firewall (local or through a VPN), there's a good chance that it's failing to download files correctly.
Try installing from another location or disabling the VPN.
Or request your network admin to allow the connections.
When I initially started my server while on a VPN there were two connection failure stack traces.
I am running Jenkins on Ubuntu in AWS.
I had this initial problem after booting up the box.
Ran the following:
sudo service jenkins restart
All working for me now.
Check the context path you gave Jenkins when deploying it in Tomcat, then restart/safeRestart from that path.
For example, if the context path is "/jenkins", the restart URL will be
http://localhost:8080/jenkins/restart OR
http://localhost:8080/jenkins/safeRestart
I just unchecked all the plugins and clicked the Install button. After that, it took me through to the dashboard.
Just restart the machine. That worked for me.

Apache Zeppelin tutorial failing

Recently I installed Zeppelin from git using mvn clean package -Pspark-1.5 -Dspark.version=1.5.1 -Phadoop-2.4 -Pyarn -Ppyspark -DskipTests and I can't run the tutorial because of this error:
java.net.ConnectException
Any idea why this is happening? I haven't modified any of the conf files because I am interested in running it using the embedded Spark binaries.
I have already checked most of the threads here and none of them has helped.
Thanks
EDIT: I am using a Mac
Apache Zeppelin uses a multi-process architecture in which the ZeppelinServer process communicates with the InterpreterGroup process through the Apache Thrift API.
This error usually indicates that the ZeppelinServer process cannot reach the Interpreter process running on the same machine, typically because the latter terminated abnormally.
More details can be found in the Interpreter process logs, ./logs/zeppelin-interpreter-<interpreter name>-<username>-<hostname>.log, and in the ZeppelinServer process logs, ./logs/zeppelin-<username>-<hostname>.log.
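For example, from the Zeppelin installation directory you could look for the error in both sets of logs roughly like this (the globs stand in for the interpreter name, username, and hostname patterns above):
# search all Zeppelin logs for the connection error
grep -n "ConnectException" logs/zeppelin-*.log
# tail the interpreter log directly
tail -n 100 logs/zeppelin-interpreter-*.log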
