I have an Apache Beam application (Beam 2.23.0) that I am trying to deploy on AWS EMR (emr-5.30.1) with Flink (1.10.0) preinstalled.
The application runs with no issues when I deploy it on my local Docker Flink cluster, but when I run
flink run -m yarn-cluster -c my_class my_jar.jar
on the master node of the EMR cluster, I get:
java.util.ServiceConfigurationError: com.fasterxml.jackson.databind.Module: Provider com.fasterxml.jackson.module.jaxb.JaxbAnnotationModule not a subtype
at java.util.ServiceLoader.fail(ServiceLoader.java:239)
at java.util.ServiceLoader.access$300(ServiceLoader.java:185)
at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:376)
at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
at com.fasterxml.jackson.databind.ObjectMapper.findModules(ObjectMapper.java:1054)
at org.apache.beam.sdk.options.PipelineOptionsFactory.<clinit>(PipelineOptionsFactory.java:471)
at org.myapp.main(MainApp.java:78)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:321)
at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:205)
at org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:138)
at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:664)
at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:213)
at org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:895)
at org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:968)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1844)
at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:968)
The issue seems to originate in
org.apache.beam.sdk.options.PipelineOptionsFactory.<clinit>(PipelineOptionsFactory.java:471), but I am not clear on what is causing this behaviour.
Can someone please advise what may be causing this?
Thank you in advance!
That is probably a classloading issue.
On the EMR Flink EC2 instances, some jars are already present, and these libraries are loaded before your own dependencies. So the version used at runtime is the one provided by EMR, not the one you declare as a dependency in your own pom.xml.
There are multiple solutions:
in your pom.xml, use the same version as the one provided by EMR (see the sketch after this list)
on the EC2 instances, replace the EMR version with yours
change the order of library loading
Whatever the solution, you need to ship all the required dependencies to Flink, not only the jar that contains your own code.
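For the first option, a minimal pom.xml sketch; the artifact matches the module named in the stack trace, and the version is a placeholder you would replace with whatever Jackson jar actually sits on the EMR node (for example under /usr/lib/flink/lib):
<!-- Pin Jackson to the version EMR ships; the version below is illustrative. -->
<dependency>
  <groupId>com.fasterxml.jackson.module</groupId>
  <artifactId>jackson-module-jaxb-annotations</artifactId>
  <version>2.10.1</version>
</dependency>
For the third option, Flink exposes the classloader.resolve-order setting in flink-conf.yaml (child-first by default in recent Flink versions), which controls whether classes from the user jar or from the cluster's libraries win.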
Related
I am getting this error when I deploy my Flink application on EMR:
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/flink/api/common/serialization/DeserializationSchema
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at org.apache.hadoop.util.RunJar.run(RunJar.java:232)
It works fine, though, when I deploy it on a local cluster. I am using Flink 1.9.0 on EMR release 5.28.0.
This issue can be caused by multiple different things. Things to check:
Version mismatch between the Flink version in your dependencies and the Flink version on EMR.
The core Flink dependencies should be marked as `provided` so that they don't clash with the dependencies that are already available on the cluster (see the sketch below).
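A minimal pom.xml sketch of the `provided` scope, assuming the Flink 1.9.0 / Scala 2.11 artifacts that match the question; adjust the artifact id to whichever Flink modules you actually use:
<!-- Provided: available at compile time, not bundled into the job jar,
     so the cluster's own Flink jars are the ones used at runtime. -->
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-streaming-java_2.11</artifactId>
  <version>1.9.0</version>
  <scope>provided</scope>
</dependency>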
What is your JDK version? Is it possible that there is a problem with the environment? I think it is very likely that the JDK versions do not match.
I tried to upgrade Apache Zeppelin to use Flink 1.4.2. Checking the source of the Flink Zeppelin interpreter, I did not find anything that seems material from a Flink version point of view, so I just updated the Flink version in the pom file to 1.4.2 and ran a new build from source, which surprisingly worked. Running the Flink batch example notebook (or a streaming example of my own), I get the following error, which I cannot properly understand:
org.apache.thrift.transport.TTransportException
at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:429)
at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:318)
at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:219)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
at org.apache.zeppelin.interpreter.thrift.RemoteInterpreterService$Client.recv_interpret(RemoteInterpreterService.java:274)
at org.apache.zeppelin.interpreter.thrift.RemoteInterpreterService$Client.interpret(RemoteInterpreterService.java:258)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreter$4.call(RemoteInterpreter.java:233)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreter$4.call(RemoteInterpreter.java:229)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreterProcess.callRemoteFunction(RemoteInterpreterProcess.java:135)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.interpret(RemoteInterpreter.java:228)
at org.apache.zeppelin.notebook.Paragraph.jobRun(Paragraph.java:437)
at org.apache.zeppelin.scheduler.Job.run(Job.java:183)
at org.apache.zeppelin.scheduler.RemoteScheduler$JobRunner.run(RemoteScheduler.java:305)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
It would be great to get some understanding of how we can move to the newest Flink version in Zeppelin.
A recent thread on the Flink users mailing list covered how to update Zeppelin to use Flink 1.4.2. The email included an image that doesn't appear in the archive, so I'll include the relevant info here:
You need to set two properties in Zeppelin, which point to the jobmanager of the Flink cluster to be used by the Zeppelin server:
jobmanager.rpc.host
jobmanager.rpc.port
Note that there are also properties named host and port, but they aren't used for this purpose.
And you'll need to build Zeppelin using profile scala-2.11 and Flink 1.4.2.
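A minimal sketch of such a source build, assuming a standard Maven checkout of Zeppelin; the profile name comes from the answer above, and the way the Flink version is set (a pom property) may differ between Zeppelin releases:
# Build Zeppelin against Scala 2.11, with the Flink version updated in the pom
mvn clean package -DskipTests -Pscala-2.11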
I tried to deploy my application to Flink on YARN with the CLI. Unfortunately, it fails with the exception below:
java.lang.NoClassDefFoundError: Lredis/clients/jedis/JedisCluster;
at java.lang.Class.getDeclaredFields0(Native Method)
at java.lang.Class.privateGetDeclaredFields(Class.java:2583)
at java.lang.Class.getDeclaredFields(Class.java:1916)
at org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:72)
at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.clean(StreamExecutionEnvironment.java:1548)
at org.apache.flink.streaming.api.datastream.DataStream.clean(DataStream.java:183)
at org.apache.flink.streaming.api.datastream.DataStream.flatMap(DataStream.java:551)
at org.apache.flink.streaming.api.scala.DataStream.flatMap(DataStream.scala:594)
at com.hypers.hwt.realtime.top.HwtRealTimeTopRunner.executeLateStream(HwtRealTimeTop.scala:138)
at com.hypers.hwt.realtime.top.HwtRealTimeTopRunner.run(HwtRealTimeTop.scala:72)
at com.hypers.hwt.realtime.top.HwtRealTimeTop$.main(HwtRealTimeTop.scala:265)
at com.hypers.hwt.realtime.top.HwtRealTimeTop.main(HwtRealTimeTop.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:528)
at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:419)
at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:381)
at org.apache.flink.client.CliFrontend.executeProgram(CliFrontend.java:838)
at org.apache.flink.client.CliFrontend.run(CliFrontend.java:259)
at org.apache.flink.client.CliFrontend.parseParameters(CliFrontend.java:1086)
at org.apache.flink.client.CliFrontend$2.call(CliFrontend.java:1133)
at org.apache.flink.client.CliFrontend$2.call(CliFrontend.java:1130)
at org.apache.flink.runtime.security.HadoopSecurityContext$1.run(HadoopSecurityContext.java:43)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:40)
at org.apache.flink.client.CliFrontend.main(CliFrontend.java:1130)
I already use the -yt parameter to distribute my external jars, but it still fails.
Actually, Flink submits a job in 3 steps:
the client wraps the code and builds the job graph
the client submits the job to the jobmanager
the jobmanager distributes the job to the taskmanagers
Problem
In extended testing, I found this exception happens in step 1, and step 1 runs locally via YarnClusterClient. I know this problem can be solved by adding my external jars to $FLINK_HOME/lib, but that would cause conflicts with other applications.
Expectation
So I want to know: is there any way to add external jars to the local classpath?
Addition
// Imports assume the Bahir flink-connector-redis artifact and Jedis;
// PvAccBean and UvAccBean are the application's own types.
import org.apache.commons.pool2.impl.GenericObjectPoolConfig
import org.apache.flink.api.common.functions.RichFlatMapFunction
import org.apache.flink.configuration.Configuration
import org.apache.flink.streaming.connectors.redis.common.config.FlinkJedisClusterConfig
import redis.clients.jedis.JedisCluster

class LateFlatMap(conf: FlinkJedisClusterConfig) extends RichFlatMapFunction[(PvAccBean, UvAccBean), Iterable[(String, Array[Byte])]] {

  var jedisCluster: JedisCluster = null

  // Create the Jedis cluster client once per task, when the function is opened
  override def open(properties: Configuration): Unit = {
    val genericObjectPoolConfig = new GenericObjectPoolConfig()
    genericObjectPoolConfig.setMaxIdle(conf.getMaxIdle())
    genericObjectPoolConfig.setMaxTotal(conf.getMaxTotal())
    genericObjectPoolConfig.setMinIdle(conf.getMinIdle())
    jedisCluster = new JedisCluster(conf.getNodes(), conf.getConnectionTimeout(),
      conf.getMaxRedirections(), genericObjectPoolConfig)
  }

  override def close(): Unit = {
    jedisCluster.close()
  }
  ...
}
Basically I see two possibilities:
Add your 3rd-party libs to your job's jar by building a fat jar. Every major build system can do this (e.g. the Maven Assembly Plugin or the SBT Assembly Plugin); a Maven sketch follows after this list. This would be my preferred solution.
If you want to use your 3rd-party libs for all of your Flink jobs, you can add them to Flink's lib directory before you start your cluster. This will also work, but offers less flexibility.
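A minimal sketch of the fat-jar approach with the Maven Assembly Plugin named above; the plugin configuration is illustrative:
<!-- Bundle all runtime dependencies into one jar at package time. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-assembly-plugin</artifactId>
  <configuration>
    <descriptorRefs>
      <descriptorRef>jar-with-dependencies</descriptorRef>
    </descriptorRefs>
  </configuration>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>single</goal>
      </goals>
    </execution>
  </executions>
</plugin>
Remember to keep the Flink core dependencies themselves out of the fat jar (scope `provided`), otherwise you reintroduce the kind of clashes discussed above.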
Hope that helps
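As an aside on the question about the local classpath: the Flink CLI also has a -C/--classpath option, which adds a URL to the user-code classloaders on the client and on all nodes (the URL must be reachable from every node). A hypothetical invocation, with placeholder paths and class name, and with the caveat from the comment below that results with it vary:
flink run -m yarn-cluster \
  -C file:///opt/external-libs/jedis.jar \
  -yt /opt/external-libs \
  -c com.example.MyJob my-job.jar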
I tried all the options: -C and -yt combined, external jars added to the classpath and to yarn.ship.directories, but it still fails when initializing the MQ connection factory. The same setup works when the jars are placed in Flink's lib directory.
Wondering why this is still not working at the end of 2020.
Try using
bin/start-scala-shell.sh local -a <full_external_jar_path>
I would like to write to S3 from Flink 1.4.2 using the Presto interface and BucketingSink. I followed the instructions: I added s3.access-key and s3.secret-key to flink-conf.yaml and put flink-s3-fs-presto-1.4.2.jar in the lib folder. Below is the error that is produced.
If the job is executed in an AWS environment, I hope that I don't need to set up keys at all. Is this assumption correct?
java.lang.IllegalArgumentException: AWS Access Key ID and Secret Access Key must be specified as the username or password (respectively) of a s3 URL, or by setting the fs.s3.awsAccessKeyId or fs.s3.awsSecretAccessKey properties (respectively).
at org.apache.hadoop.fs.s3.S3Credentials.initialize(S3Credentials.java:70)
at org.apache.hadoop.fs.s3.Jets3tFileSystemStore.initialize(Jets3tFileSystemStore.java:93)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy17.initialize(Unknown Source)
at org.apache.hadoop.fs.s3.S3FileSystem.initialize(S3FileSystem.java:91)
at org.apache.flink.streaming.connectors.fs.bucketing.BucketingSink.createHadoopFileSystem(BucketingSink.java:1206)
at org.apache.flink.streaming.connectors.fs.bucketing.BucketingSink.initFileSystem(BucketingSink.java:411)
at org.apache.flink.streaming.connectors.fs.bucketing.BucketingSink.initializeState(BucketingSink.java:355)
at org.apache.flink.streaming.util.functions.StreamingFunctionUtils.tryRestoreFunction(StreamingFunctionUtils.java:178)
at org.apache.flink.streaming.util.functions.StreamingFunctionUtils.restoreFunctionState(StreamingFunctionUtils.java:160)
at org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.initializeState(AbstractUdfStreamOperator.java:96)
at org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:258)
at org.apache.flink.streaming.runtime.tasks.StreamTask.initializeOperators(StreamTask.java:694)
at org.apache.flink.streaming.runtime.tasks.StreamTask.initializeState(StreamTask.java:682)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:253)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:718)
at java.lang.Thread.run(Thread.java:748)
The application does not seem to be using flink-s3-fs-presto at all, but rather Hadoop's deprecated old S3 file system. The stack trace you pasted indicates that flink-s3-fs-presto is not picked up for the file system scheme 's3://'.
Please make sure that the flink-s3-fs-presto JAR file is really in the lib folder of the TaskManagers that execute the job, not only on the client.
When you use YARN or Mesos to deploy Flink jobs, that should automatically happen.
When you deploy Flink via containers, make sure that the JAR file is in the lib folder of your container image.
When you run Flink TaskManagers standalone or start them manually, make sure all TaskManagers in the cluster have the JAR file in the lib folder before being started.
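For reference, the credentials setup described in the question looks like this in flink-conf.yaml; the values are placeholders, and these keys only take effect once flink-s3-fs-presto is actually the file system serving the s3:// scheme:
# flink-conf.yaml (read by flink-s3-fs-presto)
s3.access-key: YOUR_ACCESS_KEY
s3.secret-key: YOUR_SECRET_KEY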
I have a program in Apache Flink. I tested and ran it on the local machine and everything works fine. To run the program on a remote cluster, I made the necessary changes as mentioned on the Apache Flink official website.
I made the following changes:
Replaced
ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
with
ExecutionEnvironment env = ExecutionEnvironment.createRemoteEnvironment("taskManagerName", portNo, parallelismNo);
Fixed the necessary paths for reading input files and writing outputs.
Generated a thin jar out of the program and put the necessary jar libraries into a folder beside my project jar file, myproj.jar.
Copied the data, the jar libraries, and myproj.jar to the cluster and ran the following command remotely on the cluster:
java -cp pathToJarLib \\* -jar myproj.jar
But I get the error below and don't have any clue how to fix it. There are no relevant log files that could aid me in fixing this issue.
Error:
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/flink/api/common/functions/MapFunction
at java.lang.Class.getDeclaredMethods0(Native Method)
at java.lang.Class.privateGetDeclaredMethods(Class.java:2570)
at java.lang.Class.getMethod0(Class.java:2813)
at java.lang.Class.getMethod(Class.java:1663)
at sun.launcher.LauncherHelper.getMainMethod(LauncherHelper.java:494)
at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:486)
Caused by: java.lang.ClassNotFoundException: org.apache.flink.api.common.functions.MapFunction
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
... 6 more
Your classpath is obviously not complete. Try to submit via bin/flink run myproj.jar. This sets up the classpath correctly.
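If you stay with createRemoteEnvironment instead of the CLI, note that it also accepts paths of jar files to ship to the cluster, which is how your user classes become available on the remote side. A minimal sketch with placeholder host, port, and path:
// Ships myproj.jar to the cluster so its classes resolve remotely;
// the host must be the JobManager, not a TaskManager.
ExecutionEnvironment env = ExecutionEnvironment.createRemoteEnvironment(
    "jobManagerHost", 6123, "path/to/myproj.jar");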