Flink: Error while starting scala-shell - Could not create the DispatcherResourceManagerComponent - apache-flink

I am using Flink version 1.10.0, and when starting the Scala shell with 'start-scala-shell.sh' it throws the following exception:
Exception in thread "main" org.apache.flink.util.FlinkException: Could not create the DispatcherResourceManagerComponent.
at org.apache.flink.runtime.entrypoint.component.DefaultDispatcherResourceManagerComponentFactory.create(DefaultDispatcherResourceManagerComponentFactory.java:261)
I've changed the REST port from 8081 to 8089, but I am still facing the same issue.
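For reference, the port change described above was made in the Flink configuration, roughly like this (a sketch, assuming the default conf/flink-conf.yaml):
rest.port: 8089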
I even tried this with Flink 1.9.2, but faced the same issue.
Kindly help!

Related

Zeppelin: While running Spark code getting spark-interpreter-0.10.0.jar file not found

I am getting the following error while executing Spark code through Zeppelin.
ERROR deploy.ClientEndpoint: Exception from cluster was: java.nio.file.NoSuchFileException: /opt/zeppelin/zeppelin/interpreter/spark/spark-interpreter-0.10.0.jar
java.nio.file.NoSuchFileException: /opt/zeppelin/zeppelin/interpreter/spark/spark-interpreter-0.10.0.jar
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at sun.nio.fs.UnixCopyFile.copy(UnixCopyFile.java:526)
at sun.nio.fs.UnixFileSystemProvider.copy(UnixFileSystemProvider.java:253)
at java.nio.file.Files.copy(Files.java:1274)
at org.apache.spark.util.Utils$.copyRecursive(Utils.scala:726)
at org.apache.spark.util.Utils$.copyFile(Utils.scala:697)
at org.apache.spark.util.Utils$.doFetchFile(Utils.scala:771)
at org.apache.spark.util.Utils$.fetchFile(Utils.scala:541)
at org.apache.spark.deploy.worker.DriverRunner.downloadUserJar(DriverRunner.scala:162)
at org.apache.spark.deploy.worker.DriverRunner.prepareAndRunDriver(DriverRunner.scala:180)
at org.apache.spark.deploy.worker.DriverRunner$$anon$2.run(DriverRunner.scala:99)
You should set the property deployMode to client.
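A minimal sketch of that change, assuming it is made in Zeppelin's Spark interpreter properties (the Spark-side property name is spark.submit.deployMode; the exact setting name in your Zeppelin version may differ):
spark.submit.deployMode  client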
I was working on a setup with Zeppelin 0.10, Spark 3.x, and YARN. The issue got resolved; please refer to the image.

Failed to create collection 'techproducts' due to: Underlying core creation failed while creating collection: techproducts

I just started learning Solr with the official documentation, and during the first exercise, "Index Techproducts Example Data", I failed with the following error: "Failed to create collection 'techproducts' due to: Underlying core creation failed while creating collection: techproducts".
I tried changing the Java version from 13 to 8, but it didn't help.
Here is link to the documentation: https://lucene.apache.org/solr/guide/8_5/solr-tutorial.html#exercise-1
Stack trace from the Solr Admin console:
Collection: techproducts operation: create failed:org.apache.solr.common.SolrException: Underlying core creation failed while creating collection: techproducts
at org.apache.solr.cloud.api.collections.CreateCollectionCmd.call(CreateCollectionCmd.java:304)
at org.apache.solr.cloud.api.collections.OverseerCollectionMessageHandler.processMessage(OverseerCollectionMessageHandler.java:263)
at org.apache.solr.cloud.OverseerTaskProcessor$Runner.run(OverseerTaskProcessor.java:504)
at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:210)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
I had run into a similar situation while following Solr's official tutorial:
➜ solr-8.7.0 ERROR: Failed to create collection 'techproducts' due to: Underlying core creation failed while creating collection: techproducts
And the problem was solved by turning off my VPN. I guess the VPN routing probably messed up Solr's localhost setup somehow.
I had the same 'Underlying core creation failed...' error too, using Java 11 on Windows 10.
The log file was ${solr-home}\example\cloud\node1\logs\solr.log. Inside it had:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://192.168.1.16:7574/solr: Error CREATEing SolrCore 'techproducts_shard1_replica_n1': Unable to create core [techproducts_shard1_replica_n1] Caused by: no segments* file found in LockValidatingDirectoryWrapper(NRTCachingDirectory(MMapDirectory#{solr_home}\example\cloud\node2\solr\techproducts_shard1_replica_n1\data\index lockFactory=org.apache.lucene.store.NativeFSLockFactory#16326253; maxCacheMB=48.0 maxMergeSizeMB=4.0)): files: [write.lock] at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:681) ~[?:?]
at (etc. etc.)
But this was the second time I launched Solr. The first time it timed out trying to contact one of the nodes and the tutorial script aborted, but the nodes were still running. I killed them off using the Windows Task Manager rather than with solr stop, so I suspect I left an unstable mess behind, and the second time the tutorial ran it crashed into that mess.
I erased everything and started over from unzipping, and this third time there were no timeouts and the tutorial completed without error.
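As an aside, the cleaner way to shut the example nodes down (instead of killing them from Task Manager) is Solr's own stop command, for example on Windows:
bin\solr.cmd stop -all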
In the file /opt/solr/server/etc/jetty.xml:
(1) In the Set name="requestHeaderSize" element, set the Property name="solr.jetty.request.header.size" default to "81920".
(2) In the Set name="responseHeaderSize" element, set the Property name="solr.jetty.response.header.size" default to "81920".
(3) Restart Solr.
Hm, tried this, still getting the exact same error.
After Change:
[Set name="requestHeaderSize"][Property name="solr.jetty.request.header.size" default="81920" /][/Set]
[Set name="responseHeaderSize"][Property name="solr.jetty.response.header.size" default="81920" /][/Set]
I stopped everything and retried. Windows Firewall then prompted me to authorize 'SAP Machine' (the Java 11 runtime); I accepted it and retried, and then it worked. It seems to be Windows Firewall related.

Error while deploying flink application on EMR

I am getting this error when I deploy my Flink application on EMR:
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/flink/api/common/serialization/DeserializationSchema
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at org.apache.hadoop.util.RunJar.run(RunJar.java:232)
However, it works fine when I deploy it on a local cluster. I am using Flink 1.9.0 on EMR version 5.28.0.
This issue can be caused by multiple different things. Things to check are:
A version mismatch between the Flink version in your dependencies and the Flink version on EMR.
The core Flink dependencies should be marked as `provided` so that they do not clash with the dependencies already available on the cluster (see the example below).
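For example, in a Maven build this means marking the core Flink artifacts with the provided scope (a sketch; the artifact ID and version below are illustrative and should match your own setup):
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-streaming-java_2.11</artifactId>
    <version>1.9.0</version>
    <scope>provided</scope>
</dependency>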
What is your JDK version? Is it possible that there is a problem with the environment? I think it is very likely that the JDK versions do not match.

Apache Flink Kubernetes Job Arguments

I'm trying to set up a cluster (Apache Flink 1.6.1) on Kubernetes and get the following error when I run a job on it:
2018-10-09 14:29:43.212 [main] INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - --------------------------------------------------------------------------------
2018-10-09 14:29:43.214 [main] INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Registered UNIX signal handlers for [TERM, HUP, INT]
Exception in thread "main" java.lang.NoSuchMethodError: org.apache.flink.runtime.entrypoint.ClusterConfiguration.<init>(Ljava/lang/String;Ljava/util/Properties;[Ljava/lang/String;)V
at org.apache.flink.runtime.entrypoint.EntrypointClusterConfiguration.<init>(EntrypointClusterConfiguration.java:37)
at org.apache.flink.container.entrypoint.StandaloneJobClusterConfiguration.<init>(StandaloneJobClusterConfiguration.java:41)
at org.apache.flink.container.entrypoint.StandaloneJobClusterConfigurationParserFactory.createResult(StandaloneJobClusterConfigurationParserFactory.java:78)
at org.apache.flink.container.entrypoint.StandaloneJobClusterConfigurationParserFactory.createResult(StandaloneJobClusterConfigurationParserFactory.java:42)
at org.apache.flink.runtime.entrypoint.parser.CommandLineParser.parse(CommandLineParser.java:55)
at org.apache.flink.container.entrypoint.StandaloneJobClusterEntryPoint.main(StandaloneJobClusterEntryPoint.java:153)
My job takes a configuration file (file.properties) as a parameter. This works fine in standalone mode, but apparently the Kubernetes cluster cannot parse it.
job-cluster-job.yaml:
args: ["job-cluster", "--job-classname", "com.test.Abcd", "-Djobmanager.rpc.address=flink-job-cluster",
"-Dparallelism.default=1", "-Dblob.server.port=6124", "-Dquery.server.ports=6125", "file.properties"]
How to fix this?
Update: The job was built for Apache Flink 1.4.2, and this might be the issue; looking into it.
The job was built for 1.4.2, while the class in the error (EntrypointClusterConfiguration.java) appears to have been added in 1.6.1 (https://github.com/apache/flink/commit/ab9bd87e521d19db7c7d783268a3532d2e876a5d#diff-d1169e00afa40576ea8e4f3c472cf858), so this caused the issue.
We updated the job's dependencies to point to the new 1.6.1 release, and the arguments are now parsed correctly.
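Concretely, that amounted to bumping the Flink version used by the job's build to match the cluster, e.g. via a Maven property (illustrative; the actual dependency entries are whatever the job already declares):
<properties>
    <flink.version>1.6.1</flink.version>
</properties>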

Error while creating web service using apache CXF wizard

I am creating a web service through Apache CXF, but as I proceed (before the WSDL gets created), I receive the following error:
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/cxf/tools/java2wsdl/JavaToWSDL
It seems like the java2ws.bat file has an error. The statement in the bat file is:
"%JAVA_HOME%\bin\java" -Djava.endorsed.dirs="%CXF_HOME%\lib\endorsed" -cp "%CXF_JAR%;%TOOLS_JAR%;%CLASSPATH%" -Djava.util.logging.config.file="%CXF_HOME%\etc\logging.properties" org.apache.cxf.tools.java2ws.JavaToWS %*
It seems that at runtime the JVM is not able to find the CXF jar. I added it to the classpath, but I still get the same error.
Please help me solve the issue.
The problem arises while creating the JVM. You can refer to: Java Refuses to Start - Could not reserve enough space for object heap.
It solved my problem.
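For context, the usual remedy in that linked question is to give the JVM an explicit, smaller maximum heap so it can reserve the space it requests; applied to the java2ws.bat line above, that would look roughly like this (illustrative -Xmx value):
"%JAVA_HOME%\bin\java" -Xmx512m -Djava.endorsed.dirs="%CXF_HOME%\lib\endorsed" -cp "%CXF_JAR%;%TOOLS_JAR%;%CLASSPATH%" -Djava.util.logging.config.file="%CXF_HOME%\etc\logging.properties" org.apache.cxf.tools.java2ws.JavaToWS %*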
