Flink CEP: java.lang.NoSuchMethodError - apache-flink

When I try to run my jar file from the command line, I get a NoSuchMethodError, but the code runs fine in the NetBeans IDE:

flink run /home/admin/Documents/flink_cep/Flink-master/dist/Kinesis.jar

A NoSuchMethodError indicates a version conflict.
Verify that you compiled your Flink job against the same Flink version that your cluster is running.
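In practice that means pinning the Flink dependencies in the job's pom.xml to the cluster's version and marking them provided, so the cluster's own jars are used at runtime. A minimal sketch, assuming Maven and a Scala 2.11 build; the version shown is a placeholder to be matched to your cluster:

<properties>
    <!-- must match the version running on the Job Manager and Task Managers -->
    <flink.version>1.9.1</flink.version>
</properties>
<dependencies>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-streaming-java_2.11</artifactId>
        <version>${flink.version}</version>
        <!-- provided: the cluster supplies these classes at runtime -->
        <scope>provided</scope>
    </dependency>
</dependencies>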

Related

Problem when running the first Flink python code

I want to run my first Flink program, so I created a virtual environment and ran it with: python tab.py
I found the thread
What's wrong with my Pyflink setup that Python UDFs throw py4j exceptions?
but its suggestions don't work for me.
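For reference, a typical PyFlink setup along the lines the question describes would look something like this (apache-flink is the official PyPI distribution of PyFlink; the script name tab.py comes from the question):

python -m venv venv
source venv/bin/activate
pip install apache-flink
python tab.py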

Upgrading Apache Flink: do I need to update pom.xml?

I've just upgraded Flink from version 1.9.1 to 1.11.2 (using Docker).
I already have many Flink jobs running on version 1.9.1.
When I try to upgrade to 1.11.2 and re-run my jobs, I get errors:
2020-11-12 06:49:17,731 WARN org.apache.zookeeper.ClientCnxn []
- SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/tmp/jaas-1135609831848314731.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it.
2020-11-12 06:49:17,739 INFO org.apache.zookeeper.ClientCnxn [] - Opening socket connection to server xxxxxx:2181
2020-11-12 06:49:17,741 ERROR org.apache.curator.ConnectionState [] - Authentication failed
And this is the error after deploying my flink job:
Caused by: java.lang.RuntimeException: API paths not defined
and also:
java.lang.NoSuchMethodError: org.apache.flink.api.common.state.OperatorStateStore.getSerializableListState(Ljava/lang/String;)Lorg/apache/flink/api/common/state/ListState;
Do I need to change the pom for every one of my Flink jobs?
Is there any workaround that doesn't require changing my source code?
Thanks
Yes, you do have to rebuild your Flink jobs whenever you update the Flink version used to run them. The libraries you compile against should come from the exact same version used by the Job Manager and Task Managers.
If you are trying to automate deployments for a CI/CD pipeline, you could inject the version number into the pom.xml using an environment variable -- but doing things like that can make it hard to debug when things go wrong.
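A minimal sketch of that env-var approach, assuming Maven (the property and variable names are illustrative; Maven resolves ${env.FLINK_VERSION} from the shell environment at build time):

<properties>
    <!-- picked up from the FLINK_VERSION environment variable set by the CI job -->
    <flink.version>${env.FLINK_VERSION}</flink.version>
</properties>

The pipeline can then build with, for example:

FLINK_VERSION=1.11.2 mvn clean package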

apache flink windows installation

I am trying to install Flink on Windows and running into all sorts of problems. Please help.
Downloading the tar file does not give me any Windows .bat files. I used the download links at https://flink.apache.org/downloads.html#apache-flink-1111, so I cannot run start-local.bat; in fact I don't even have start-local.sh. I ended up installing Cygwin just so I could run the start-cluster script.
However, running start-cluster.sh gives weird errors and exits immediately:
$ ./start-cluster.sh
Starting cluster.
Starting standalonesession daemon on host DESKTOP**.
Starting taskexecutor daemon on host DESKTOP**.
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
Improperly specified VM option 'MaxMetaspaceSize=268435456
'
This bug has been open for some time and has been deferred to Flink 1.14:
https://issues.apache.org/jira/browse/FLINK-18438
https://issues.apache.org/jira/browse/FLINK-18792
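The stray newline inside the quoted MaxMetaspaceSize value suggests Windows carriage returns are leaking into the scripts or config files. One workaround often suggested for Cygwin setups (an assumption on my part; the linked issues track the underlying bug) is to tell Cygwin's bash to ignore carriage returns:

# add to ~/.bash_profile so Cygwin's bash ignores carriage returns
export SHELLOPTS
set -o igncr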

Kafka Flink logging issue

I am working on a Kafka-Flink integration and the integration itself is done. I wrote a simple word count program in Java using the Flink API. When I run it with java -jar myjarname it works fine, but when I run it with ./bin/flink run myjarname I get the following error:
NoSuchMethodError:org.apache.flink.streaming.api.operators.isCheckpointingEnabled
The jar in question is there, but it still gives me the above error.
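For reference, a word count job of the kind described would look roughly like this (a sketch using the DataStream API; the class name and the inline source are illustrative stand-ins for the Kafka source):

import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.util.Collector;

public class WordCount {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.fromElements("to be or not to be")
            // split each line into (word, 1) pairs
            .flatMap((String line, Collector<Tuple2<String, Integer>> out) -> {
                for (String word : line.split("\\s+")) {
                    out.collect(Tuple2.of(word, 1));
                }
            })
            // lambdas lose generic type information, so declare it explicitly
            .returns(Types.TUPLE(Types.STRING, Types.INT))
            .keyBy(t -> t.f0)
            .sum(1)
            .print();
        env.execute("WordCount");
    }
}

As the first answer on this page notes, a NoSuchMethodError that appears under ./bin/flink run but not under java -jar usually means the jar was compiled against a different Flink version than the one the CLI submits to.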

Apache Zeppelin tutorial failing

Recently I installed Zeppelin from git using mvn clean package -Pspark-1.5 -Dspark.version=1.5.1 -Phadoop-2.4 -Pyarn -Ppyspark -DskipTests and I can't run the tutorial because of this error:
java.net.ConnectException
Any idea why this is happening? I haven't modified any of the conf files because I am interested in running it using the embedded Spark binaries.
I have already checked most of the threads here and none of them worked.
Thanks
EDIT: I am using a Mac
Apache Zeppelin uses a multi-process architecture, in which the ZeppelinServer process communicates with InterpreterGroup processes through the Apache Thrift API.
This error usually indicates that the ZeppelinServer process cannot reach an Interpreter process running on the same machine, typically because the latter terminated abnormally.
More details can be found in the Interpreter process logs, ./logs/zeppelin-interpreter-<interpreter name>-<username>-<hostname>.log, and the ZeppelinServer process logs, ./logs/zeppelin-<username>-<hostname>.log.
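For the embedded Spark tutorial, that usually means inspecting something like the following (the interpreter name spark is an assumption; substitute your own username and hostname):

tail -n 200 ./logs/zeppelin-interpreter-spark-<username>-<hostname>.log
tail -n 200 ./logs/zeppelin-<username>-<hostname>.log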
