We have several Flink applications reading from Kafka topics, and they work fine. But recently we added a new topic to an existing Flink job, and it started failing immediately on startup with the following root error:
Caused by: org.apache.kafka.common.KafkaException: java.lang.NoClassDefFoundError: net/jpountz/lz4/LZ4Exception
at org.apache.kafka.common.record.CompressionType$4.wrapForInput(CompressionType.java:113)
at org.apache.kafka.common.record.DefaultRecordBatch.compressedIterator(DefaultRecordBatch.java:256)
at org.apache.kafka.common.record.DefaultRecordBatch.streamingIterator(DefaultRecordBatch.java:334)
at org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.nextFetchedRecord(Fetcher.java:1208)
at org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.fetchRecords(Fetcher.java:1245)
... 7 more
I found out that this topic uses lz4 compression, and my guess is that Flink is for some reason unable to work with it. Adding the lz4 dependency directly to the app didn't help, and, what's weird, the job runs fine locally but fails on the remote cluster.
The Flink runtime version is 1.9.1, and we use the same version for all the Flink dependencies in our application:
flink-streaming-java_2.11, flink-connector-kafka_2.11, flink-java and flink-clients_2.11
Could this be happening because Flink does not ship the lz4 library internally?
Found the solution. No version upgrade was needed, nor any additional dependencies in the application itself. What worked for us was adding the lz4 library jar directly to the Flink lib folder in the Docker image. After that, the lz4 compression error disappeared.
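For reference, here is roughly what we added (a minimal sketch; the lz4-java version and the /opt/flink path are assumptions, so match the version to your Kafka client and the path to your image):
# download the lz4-java jar (it provides net.jpountz.lz4.*) into Flink's lib folder,
# e.g. as a RUN step in the Dockerfile; version 1.6.0 is just an example
wget -P /opt/flink/lib https://repo1.maven.org/maven2/org/lz4/lz4-java/1.6.0/lz4-java-1.6.0.jar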
Related
We recently upgraded our Flink cluster to version 1.9.1, and an error related to Hadoop s3a now occurs. The message is below.
2020-01-16 08:39:49,283 ERROR org.apache.flink.runtime.blob.BlobServerConnection - PUT operation failed
org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.UnsupportedFileSystemException: No FileSystem for scheme "file"
at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3332)
at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3352)
at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3371)
at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.FileSystem.get(FileSystem.java:477)
at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.FileSystem.getLocal(FileSystem.java:433)
at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.confChanged(LocalDirAllocator.java:301)
at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:378)
at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.createTmpFileForWrite(LocalDirAllocator.java:456)
at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.LocalDirAllocator.createTmpFileForWrite(LocalDirAllocator.java:200)
at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.s3a.S3AFileSystem.createTmpFileForWrite(S3AFileSystem.java:572)
at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.s3a.S3ADataBlocks$DiskBlockFactory.create(S3ADataBlocks.java:811)
at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.s3a.S3ABlockOutputStream.createBlockIfNeeded(S3ABlockOutputStream.java:190)
at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.s3a.S3ABlockOutputStream.<init>(S3ABlockOutputStream.java:168)
at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.s3a.S3AFileSystem.create(S3AFileSystem.java:778)
at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1169)
at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1149)
at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1038)
at org.apache.flink.fs.s3.common.hadoop.HadoopFileSystem.create(HadoopFileSystem.java:141)
at org.apache.flink.fs.s3.common.hadoop.HadoopFileSystem.create(HadoopFileSystem.java:37)
at org.apache.flink.runtime.blob.FileSystemBlobStore.put(FileSystemBlobStore.java:73)
at org.apache.flink.runtime.blob.FileSystemBlobStore.put(FileSystemBlobStore.java:69)
at org.apache.flink.runtime.blob.BlobUtils.moveTempFileToStore(BlobUtils.java:444)
at org.apache.flink.runtime.blob.BlobServer.moveTempFileToStore(BlobServer.java:694)
at org.apache.flink.runtime.blob.BlobServerConnection.put(BlobServerConnection.java:351)
at org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:114)
I guess the s3a Hadoop filesystem is trying to create local files but cannot find the 'file' filesystem. Can anyone advise on the potential problem here?
Thanks
The plugin loader had a shortcoming in 1.9.0 and 1.9.1 that prevented the plugins from lazily loading new classes. It's fixed in the upcoming 1.9.2 and 1.10 releases.
For the time being, you could simply add the jar to the lib folder as a workaround. Note, however, that in 1.10 you can only use s3 through plugins, so keep that in mind when you upgrade.
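For example, from the root of the Flink distribution (a sketch; the jar names are illustrative, so use the s3 filesystem variant and version you actually run):
# 1.9.x workaround: load the s3 filesystem from lib/ instead of plugins/
cp opt/flink-s3-fs-hadoop-1.9.1.jar lib/
# from 1.10 on, s3 filesystems must go into their own plugins/ subdirectory
mkdir -p plugins/s3-fs-hadoop
cp opt/flink-s3-fs-hadoop-1.10.0.jar plugins/s3-fs-hadoop/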
I'm just trying to migrate from Flink 1.3 to 1.4 and am getting this exception on a Linux machine (it does not reproduce on Windows). I've imported this package as well:
// https://mvnrepository.com/artifact/org.apache.flink/flink-shaded-hadoop2
compile group: 'org.apache.flink', name: 'flink-shaded-hadoop2', version: '1.4.0'
Any help?
At the Flink console:
TriggerWindow(TumblingProcessingTimeWindows(10000), ReducingStateDescriptor{serializer=org.apache.flink.api.java.typeutils.runtime.TupleSerializer@cb6c5dba, reduceFunction=com.clicktale.reducers.MetricsReducer@4e406694}, ProcessingTimeTrigger(), WindowedStream.reduce(WindowedStream.java:241)) -> Sink: Unnamed (1/1)
java.util.ServiceConfigurationError: org.apache.hadoop.fs.FileSystem: Provider org.apache.hadoop.fs.LocalFileSystem not a subtype
at java.util.ServiceLoader.fail(ServiceLoader.java:239)
at java.util.ServiceLoader.access$300(ServiceLoader.java:185)
at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:376)
at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
at org.apache.hadoop.fs.FileSystem.loadFileSystems(FileSystem.java:2364)
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2375)
at org.apache.flink.runtime.fs.hdfs.HadoopFsFactory.create(HadoopFsFactory.java:99)
at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:401)
at org.apache.flink.streaming.connectors.fs.bucketing.BucketingSink.createHadoopFileSystem(BucketingSink.java:1154)
at org.apache.flink.streaming.connectors.fs.bucketing.BucketingSink.initFileSystem(BucketingSink.java:411)
at org.apache.flink.streaming.connectors.fs.bucketing.BucketingSink.initializeState(BucketingSink.java:355)
at org.apache.flink.streaming.util.functions.StreamingFunctionUtils.tryRestoreFunction(StreamingFunctionUtils.java:178)
at org.apache.flink.streaming.util.functions.StreamingFunctionUtils.restoreFunctionState(StreamingFunctionUtils.java:160)
at org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.initializeState(AbstractUdfStreamOperator.java:96)
at org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:259)
at org.apache.flink.streaming.runtime.tasks.StreamTask.initializeOperators(StreamTask.java:694)
at org.apache.flink.streaming.runtime.tasks.StreamTask.initializeState(StreamTask.java:682)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:253)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:718)
at java.lang.Thread.run(Thread.java:748)
I faced similar issues (not specifically this one, but dependency-related) migrating from 1.3 to 1.4.
In my case, I had to re-generate a fresh POM file using maven archetype and then add the needed dependencies one by one.
See Java Quickstart or Scala Quickstart.
The reason is that there has been a major rework of the dependency structure. See the release notes for more information.
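For reference, re-generating the skeleton looks roughly like this (the standard quickstart archetype; use flink-quickstart-scala for Scala projects and adjust the version):
mvn archetype:generate \
  -DarchetypeGroupId=org.apache.flink \
  -DarchetypeArtifactId=flink-quickstart-java \
  -DarchetypeVersion=1.4.0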
Note that Flink 1.4 will load any Hadoop jars found via the "hadoop classpath" shell command, and these will be first on the classpath. So if you have an incompatible version of Hadoop installed that the "hadoop" command points at, you can run into this kind of problem.
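A quick way to check what would be picked up (assuming the hadoop binary is on the PATH of the user starting Flink):
# show the Hadoop jars Flink would prepend to its classpath
hadoop classpath
# if these are incompatible, remove hadoop from that user's PATH
# (or point it at a compatible installation) before starting the cluster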
I am using Flink to stream data that is in a CSV file. I want to put it into table format with a certain schema. For this purpose I am using flink-table_2.10-1.1.3.jar (the Table API), but I get these errors:
log4j:WARN No appenders could be found for logger (org.apache.flink.api.java.typeutils.TypeExtractor).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/flink/shaded/calcite/com/google/common/base/Throwables
at org.apache.calcite.rel.metadata.JaninoRelMetadataProvider.create(JaninoRelMetadataProvider.java:450)
at org.apache.calcite.rel.metadata.JaninoRelMetadataProvider.revise(JaninoRelMetadataProvider.java:460)
at org.apache.calcite.rel.metadata.RelMetadataQuery.revise(RelMetadataQuery.java:186)
at org.apache.calcite.rel.metadata.RelMetadataQuery.collations(RelMetadataQuery.java:484)
at org.apache.calcite.rel.metadata.RelMdCollation.project(RelMdCollation.java:207)
at org.apache.calcite.rel.logical.LogicalProject$1.get(LogicalProject.java:122)
at org.apache.calcite.rel.logical.LogicalProject$1.get(LogicalProject.java:120)
at org.apache.calcite.plan.RelTraitSet.replaceIfs(RelTraitSet.java:238)
at org.apache.calcite.rel.logical.LogicalProject.create(LogicalProject.java:116)
at org.apache.calcite.rel.logical.LogicalProject.create(LogicalProject.java:108)
at org.apache.flink.api.table.plan.logical.Project.construct(operators.scala:90)
at org.apache.flink.api.table.plan.logical.Project.construct(operators.scala:85)
at org.apache.flink.api.table.plan.logical.LogicalNode.toRelNode(LogicalNode.scala:78)
at org.apache.flink.api.table.Table.getRelNode(table.scala:66)
at org.apache.flink.api.table.StreamTableEnvironment.translate(StreamTableEnvironment.scala:243)
at org.apache.flink.api.java.table.StreamTableEnvironment.toDataStream(StreamTableEnvironment.scala:147)
at table_streaming_test.main(table_streaming_test.java:90)
Caused by: java.lang.ClassNotFoundException: org.apache.flink.shaded.calcite.com.google.common.base.Throwables
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 17 more
When I explore the corresponding jar, the respective class is present there. Can you please tell me why this is happening?
Also, can I get the Maven source so that I can build the flink-table jar myself?
I had the same problem with the CEP library. I added it to my POM file but kept getting a ClassNotFoundException. I even packaged it with my jar file via IntelliJ, but that didn't work.
If you're using their flink-quickstart archetype, I think there are some other things to change in the POM file to make it work. When I created a clean project and added the Flink dependencies myself, I didn't get that exception anymore. You can try this approach and see if it works.
You can also add the flink-table JAR file to the lib folder of your Flink installation; this also fixed my problem with the CEP library. The JAR file is available on the Maven repository website; download the version you want.
According to the Table and SQL documentation on the Flink website:
Note: The Table API is currently not part of the binary distribution.
See linking with it for cluster execution here.
I was also facing the same problem with the Table API in Flink v1.4.2.
I added the flink-table_2.11-1.4.2.jar file from the opt folder to the lib folder and restarted Flink.
This worked for me. Hopefully it works for you too :)
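For the record, the steps amount to something like this (run from the Flink distribution root; the jar name varies with your Flink and Scala versions):
# move the Table API jar from opt/ into lib/ so the cluster loads it
cp opt/flink-table_2.11-1.4.2.jar lib/
# restart the cluster so the new jar is picked up
./bin/stop-cluster.sh && ./bin/start-cluster.sh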
I followed this tutorial to get a Bigtable client up and running in Google Managed VMs. But is there a way to run it locally? The reason I ask is that deploying the code remotely during development is a pain.
Normally I can use dev_appserver.sh to run a GAE app locally. But when I run it, I'm getting this error:
Caused by: java.lang.IllegalStateException: Jetty ALPN has not been
properly configured.
Does this mean we need to include the ALPN library? Since our codebase is on Java 7, I used this ALPN version: 7.1.3.v20150130.
I then tried again with this:
dev_appserver.sh --jvm_flag=-Xbootclasspath/p:/Users/shouguoli/tmp/alpn-boot-7.1.3.v20150130.jar
I'm still getting this error:
Caused by: com.google.apphosting.api.ApiProxy$CallNotFoundException:
The API package 'urlfetch' or call 'Fetch()' was not found.
How do you get it to work locally?
The sample was updated last week. It's based on the Java 8 compat runtime, which means you have access to most of the App Engine APIs, including Users, Task Queues, and Datastore.
There is a new netty-tcnative module that uses BoringSSL.
To use it with the pom.xml in the sample, do:
mvn clean -Pmac jetty:run -Dbigtable.projectID=<your-project> -Dbigtable.clusterID=<your-cluster> -Dbigtable.zone=<your-zone>
To run on Windows, use -Pwindows instead of -Pmac. For Linux, omit the -P profile flag, as Linux is the default.
To deploy:
mvn clean gcloud:deploy -Dbigtable.projectID=<your-project> -Dbigtable.clusterID=<your-cluster> -Dbigtable.zone=<your-zone>
NOTE: it is advisable to run clean between running locally and deploying remotely, as the tcnative module is currently specific to the platform the code runs on.
We are in the process of updating all of our samples to use tcnative; we hope to have this done by 3/10/16.
What is the official way to use the Microsoft JDBC driver for MS SQL Server in a Grails application?
The general opinion I found through googling is that I only have to drop the jar into the lib directory of the Grails app. This works if I do a grails clean and a grails compile --refresh-dependencies. But when I deploy on a real server, I have two problems.
First, when redeploying, there is a warning in the logs:
24.05.2013 16:03:03 com.microsoft.sqlserver.jdbc.AuthenticationJNI
WARNING: Failed to load the sqljdbc_auth.dll cause : no sqljdbc_auth in java.library.path
I'm not sure if it's something to care about, since it's just a warning, but I would like to keep my logs clean, and I do have the DLL in the lib directory of the application, just as the Google results say. Additionally, on redeployment there are several messages like this that might relate to the first one:
24.05.2013 16:03:02 org.apache.catalina.loader.WebappClassLoader clearThreadLocalMap
SEVERE: A web application created a ThreadLocal with key of type [org.codehaus.groovy.runtime.GroovyCategorySupport.MyThreadLocal] (value [org.codehaus.groovy.runtime.GroovyCategorySupport$MyThreadLocal@76fe8d1b]) and a value of type [null] (value [null]) but failed to remove it when the web application was stopped. To prevent a memory leak, the ThreadLocal has been forcibly removed.
And the last thing: my coworker said she thinks the driver should not be installed on a per-application basis but directly into Tomcat. I actually don't know how to do this, and even if I did, it would cause a problem on the development machine, since I don't know how to get grails run-app going without the driver in the application's lib directory.
You can still place the library in the lib folder of your project and just exclude it from the WAR generation.
You don't need to exclude the jar manually every time you build your project; just follow the tip in this post.
On your Tomcat server, the jar will then be placed in the shared lib folder instead of in each web application.
If after that you still get the warning about sqljdbc_auth.dll, you will need to locate that file and add its folder to the java.library.path of the Tomcat JVM (or copy it to the Tomcat lib folder).
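A rough sketch of both steps (the paths and jar name are illustrative; adjust them to your Tomcat installation and driver version):
# put the driver into Tomcat's shared lib folder instead of the webapp
cp sqljdbc4.jar "$CATALINA_HOME/lib/"
# make the native DLL visible to the JVM, e.g. via CATALINA_OPTS in
# $CATALINA_HOME/bin/setenv.sh (use the equivalent "set" line in setenv.bat on Windows);
# java.library.path must point at the directory containing sqljdbc_auth.dll
export CATALINA_OPTS="$CATALINA_OPTS -Djava.library.path=/path/to/dll/dir"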