Flink 1.12.3 upgrade triggers NoSuchMethodError: 'scala.collection.mutable.ArrayOps scala.Predef$.refArrayOps' - apache-flink

When I upgrade my Flink Java app from 1.12.2 to 1.12.3, I get a new runtime error. I can strip down my Flink app to this two-liner:
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

public class TableEnvOnly {
    public static void main(String[] args) throws Exception {
        final StreamExecutionEnvironment streamEnv = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tableEnv = StreamTableEnvironment.create(streamEnv);
    }
}
This works and doesn't trigger any errors with Flink version 1.12.2. When I upgrade the Maven Flink dependencies to 1.12.3, the same simple app throws the error:
Exception in thread "main" java.lang.NoSuchMethodError: 'scala.collection.mutable.ArrayOps scala.Predef$.refArrayOps(java.lang.Object[])'
at org.apache.flink.table.planner.delegation.PlannerBase.<init>(PlannerBase.scala:118)
at org.apache.flink.table.planner.delegation.StreamPlanner.<init>(StreamPlanner.scala:47)
at org.apache.flink.table.planner.delegation.BlinkPlannerFactory.create(BlinkPlannerFactory.java:48)
at org.apache.flink.table.api.bridge.java.internal.StreamTableEnvironmentImpl.create(StreamTableEnvironmentImpl.java:143)
at org.apache.flink.table.api.bridge.java.StreamTableEnvironment.create(StreamTableEnvironment.java:113)
at org.apache.flink.table.api.bridge.java.StreamTableEnvironment.create(StreamTableEnvironment.java:85)
at simple.TableEnvOnly.main(TableEnvOnly.java:12)
FYI, I'm not using Scala directly. My Gradle dependencies are:
implementation("org.apache.flink:flink-table-planner-blink_2.12:1.12.3")
implementation("org.apache.flink:flink-clients_2.12:1.12.3")
implementation("org.apache.flink:flink-connector-kafka_2.12:1.12.3")
implementation("org.apache.flink:flink-connector-jdbc_2.12:1.12.3")

TL;DR: After upgrading to Flink 1.12.4, the problem magically disappears.
Details
After upgrading from Flink 1.12.2 to Flink 1.12.3, the following code stopped compiling:
import scala.collection.JavaConverters._
// env here is the Java StreamExecutionEnvironment; the wrapped stream uses the Scala DataStream API
val input = new DataStream[String](env.fromCollection(Seq("a", "b", "c").asJava))
val res = input.map(_.toUpperCase)
The Scala compiler reports the error:
could not find implicit value for evidence parameter of type org.apache.flink.api.common.typeinfo.TypeInformation[String]
The version of scala-compiler and scala-library is 2.12.7 - exactly as used by Flink.
To overcome the compilation problem, we provide an implicit instance of TypeInformation:
implicit val typeInfo: TypeInformation[String] = TypeInformation.of(classOf[String])
Then the code compiles. Nevertheless, we face the runtime failure described above:
java.lang.NoSuchMethodError: scala.Predef$.refArrayOps([Ljava/lang/Object;)Lscala/collection/mutable/ArrayOps;
at org.apache.flink.api.scala.ClosureCleaner$.getSerializedLambda(ClosureCleaner.scala:184)
at org.apache.flink.api.scala.ClosureCleaner$.org$apache$flink$api$scala$ClosureCleaner$$clean(ClosureCleaner.scala:257)
at org.apache.flink.api.scala.ClosureCleaner$.clean(ClosureCleaner.scala:168)
at org.apache.flink.streaming.api.scala.StreamExecutionEnvironment.scalaClean(StreamExecutionEnvironment.scala:859)
at org.apache.flink.streaming.api.scala.DataStream.clean(DataStream.scala:1189)
at org.apache.flink.streaming.api.scala.DataStream.map(DataStream.scala:623)
As mentioned, the upgrade to Flink 1.12.4 helps - both the compilation and the runtime failures disappear.
My guess is that some Flink 1.12.3 jars were accidentally compiled with the wrong Scala version, and the subsequent release 1.12.4 was built with the correct one.
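To verify a suspicion like this, it helps to check which scala-library actually ends up on the runtime classpath (for example via ./gradlew dependencies --configuration runtimeClasspath) and which jar the affected classes are loaded from. A minimal diagnostic sketch in Java - the class name is made up, and it only assumes scala-library is present transitively through the Flink artifacts:
// Hypothetical diagnostic: print the Scala library version on the classpath
// and the jar that provides scala.Predef at runtime.
public class ScalaClasspathCheck {
    public static void main(String[] args) {
        // Properties$ is the compiled form of the Scala object scala.util.Properties
        System.out.println("scala-library version: "
                + scala.util.Properties$.MODULE$.versionNumberString());
        System.out.println("scala.Predef loaded from: "
                + scala.Predef.class.getProtectionDomain().getCodeSource().getLocation());
    }
}
If two different Scala minor versions show up here, the binary incompatibility above is the expected result.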

Related

Upgrade dependency camel-spring-batch / spring-batch-component

When upgrading from
https://camel.apache.org/components/2.x/spring-batch-component.html
to
https://camel.apache.org/components/3.18.x/spring-batch-component.html
the query parameter "synchronous" is dropped.
Can I assume "synchronous=true" for camel-spring-batch 3.18.2?
Looking at the apache-camel project on GitHub, the parameter was removed because too few components supported a flag of that kind.
The default job launcher in Spring Batch launches jobs synchronously. If you want to run a job asynchronously, you have to configure your own job launcher and use an async task executor, like below:
@Bean(name = "asyncJobLauncher")
public JobLauncher asyncJobLauncher() throws Exception {
    SimpleJobLauncher jobLauncher = new SimpleJobLauncher();
    jobLauncher.setJobRepository(jobRepository); // jobRepository is an injected field
    jobLauncher.setTaskExecutor(new SimpleAsyncTaskExecutor());
    jobLauncher.afterPropertiesSet();
    return jobLauncher;
}
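A Camel route can then reference this launcher bean from the registry. A hedged one-line sketch - it assumes the Camel 3.x spring-batch endpoint option is named jobLauncher and that "myJob" is your job's bean name:
// Hypothetical route: launches the Spring Batch job via the async launcher defined above
from("direct:launchJob").to("spring-batch:myJob?jobLauncher=#asyncJobLauncher");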

OptaPlanner benchmarking won't run

I'm trying to run an OptaPlanner benchmark with the following configuration:
<?xml version="1.0" encoding="UTF-8"?>
<plannerBenchmark xmlns="https://www.optaplanner.org/xsd/benchmark" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                  xsi:schemaLocation="https://www.optaplanner.org/xsd/benchmark https://www.optaplanner.org/xsd/benchmark/benchmark.xsd">
  <benchmarkDirectory>data/acme</benchmarkDirectory>
  <inheritedSolverBenchmark>
    <solver>
      <solutionClass>org.acme.domain.Main</solutionClass>
      <entityClass>org.acme.domain.Standstill</entityClass>
      <entityClass>org.acme.domain.Visit</entityClass>
      <scoreDirectorFactory>
        <constraintProviderClass>org.acme.solver.MainConstraintProvider</constraintProviderClass>
        <initializingScoreTrend>ONLY_DOWN</initializingScoreTrend>
      </scoreDirectorFactory>
      <termination>
        <secondsSpentLimit>30</secondsSpentLimit>
      </termination>
    </solver>
    <problemBenchmarks>
      <solutionFileIOClass>org.acme.bootstrap.AcmeSolutionIO</solutionFileIOClass>
      <inputSolutionFile>data/latest.xml</inputSolutionFile>
    </problemBenchmarks>
  </inheritedSolverBenchmark>
  <solverBenchmark>
    <solver>
      <constructionHeuristic>
        <constructionHeuristicType>FIRST_FIT</constructionHeuristicType>
      </constructionHeuristic>
      <localSearch>
        <localSearchType>LATE_ACCEPTANCE</localSearchType>
      </localSearch>
    </solver>
  </solverBenchmark>
</plannerBenchmark>
However, I get this error message:
Exception in thread "main" org.optaplanner.benchmark.api.PlannerBenchmarkException: Benchmarking failed: failureCount (1). The exception of the firstFailureSingleBenchmarkRunner (blankData_Config_0_0) is chained.
at org.optaplanner.benchmark.impl.DefaultPlannerBenchmark.benchmarkingEnded(DefaultPlannerBenchmark.java:326)
at org.optaplanner.benchmark.impl.DefaultPlannerBenchmark.benchmark(DefaultPlannerBenchmark.java:100)
at org.optaplanner.benchmark.impl.DefaultPlannerBenchmark.benchmarkAndShowReportInBrowser(DefaultPlannerBenchmark.java:424)
at org.acme.bootstrap.BenchmarkApp.main(BenchmarkApp.kt:29)
Caused by: java.lang.IllegalStateException: The class (class java.lang.Long) should have a no-arg constructor to create a planning clone.
at org.optaplanner.core.impl.domain.solution.cloner.FieldAccessingSolutionCloner.lambda$retrieveCachedConstructor$0(FieldAccessingSolutionCloner.java:90)
at java.base/java.util.concurrent.ConcurrentHashMap.computeIfAbsent(ConcurrentHashMap.java:1708)
at org.optaplanner.core.impl.domain.common.ConcurrentMemoization.computeIfAbsent(ConcurrentMemoization.java:44)
at org.optaplanner.core.impl.domain.solution.cloner.FieldAccessingSolutionCloner.retrieveCachedConstructor(FieldAccessingSolutionCloner.java:85)
at org.optaplanner.core.impl.domain.solution.cloner.FieldAccessingSolutionCloner$FieldAccessingSolutionClonerRun.constructClone(FieldAccessingSolutionCloner.java:158)
at org.optaplanner.core.impl.domain.solution.cloner.FieldAccessingSolutionCloner$FieldAccessingSolutionClonerRun.clone(FieldAccessingSolutionCloner.java:150)
at org.optaplanner.core.impl.domain.solution.cloner.FieldAccessingSolutionCloner$FieldAccessingSolutionClonerRun.process(FieldAccessingSolutionCloner.java:207)
at org.optaplanner.core.impl.domain.solution.cloner.FieldAccessingSolutionCloner$FieldAccessingSolutionClonerRun.processQueue(FieldAccessingSolutionCloner.java:194)
at org.optaplanner.core.impl.domain.solution.cloner.FieldAccessingSolutionCloner$FieldAccessingSolutionClonerRun.cloneSolution(FieldAccessingSolutionCloner.java:136)
at org.optaplanner.core.impl.domain.solution.cloner.FieldAccessingSolutionCloner.cloneSolution(FieldAccessingSolutionCloner.java:73)
at org.optaplanner.core.impl.score.director.AbstractScoreDirector.cloneSolution(AbstractScoreDirector.java:257)
at org.optaplanner.core.impl.score.director.AbstractScoreDirector.cloneWorkingSolution(AbstractScoreDirector.java:250)
at org.optaplanner.core.impl.solver.recaller.BestSolutionRecaller.updateBestSolutionAndFire(BestSolutionRecaller.java:129)
at org.optaplanner.core.impl.constructionheuristic.DefaultConstructionHeuristicPhase.phaseEnded(DefaultConstructionHeuristicPhase.java:146)
at org.optaplanner.core.impl.constructionheuristic.DefaultConstructionHeuristicPhase.solve(DefaultConstructionHeuristicPhase.java:98)
at org.optaplanner.core.impl.solver.AbstractSolver.runPhases(AbstractSolver.java:99)
at org.optaplanner.core.impl.solver.DefaultSolver.solve(DefaultSolver.java:192)
at org.optaplanner.benchmark.impl.SubSingleBenchmarkRunner.call(SubSingleBenchmarkRunner.java:122)
at org.optaplanner.benchmark.impl.SubSingleBenchmarkRunner.call(SubSingleBenchmarkRunner.java:42)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: java.lang.NoSuchMethodException: java.lang.Long.<init>()
at java.base/java.lang.Class.getConstructor0(Class.java:3585)
at java.base/java.lang.Class.getDeclaredConstructor(Class.java:2754)
at org.optaplanner.core.impl.domain.solution.cloner.FieldAccessingSolutionCloner.lambda$retrieveCachedConstructor$0(FieldAccessingSolutionCloner.java:88)
... 22 more
I've tried many other benchmark configs, including <solverBenchmarkBluePrintType>CONSTRUCTION_HEURISTIC_WITH_AND_WITHOUT_LOCAL_SEARCH</solverBenchmarkBluePrintType>, but none of them work either.
I had it working previously, but after a number of changes it no longer works.
I'm now using a SimpleLongScore instead of a SimpleScore, and I'm wondering whether that has anything to do with the error, since it wasn't throwing this exception before.
Any ideas?
Edit #1:
I wish I could provide more code, but I really don't know what code to include.
The bizarre thing is that OptaPlanner runs just fine through mvn compile quarkus:dev, but when I run my benchmark app from its main entry point, it throws the error mentioned above: java.lang.IllegalStateException: The class (class java.lang.Long) should have a no-arg constructor to create a planning clone.
Here's my main:
fun main(args: Array<String>) {
    val plannerBenchmarkFactory: PlannerBenchmarkFactory = PlannerBenchmarkFactory.createFromXmlResource("benchmarkBluePrintConfig.xml")
    val plannerBenchmark: PlannerBenchmark = plannerBenchmarkFactory.buildPlannerBenchmark()
    plannerBenchmark.benchmarkAndShowReportInBrowser()
}
The solverConfig.xml can also be the same for both, but only the benchmark app throws the error. It's pretty odd to me.
I figured it out.
In my Visit class I was using a @CustomShadowVariable:
@DeepPlanningClone
@CustomShadowVariable(
    variableListenerClass = ArrivalTimeUpdatingVariableListener::class,
    sources = [PlanningVariableReference(variableName = "previousStandstill")]
)
var arrivalTime: Long? = null
Removing @DeepPlanningClone solved the problem.
I have no idea why it runs fine when the SolverManager is @Inject'ed, as opposed to throwing the error when the SolverConfig/SolverManager is created through the API (as it is in my benchmark app).
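For context: @DeepPlanningClone tells the solution cloner to clone the annotated field's value itself, so here it tried to instantiate java.lang.Long reflectively and failed, since Long has no no-arg constructor. A hedged Java sketch of the distinction (the field names are illustrative, not from the original project):
import java.util.ArrayList;
import java.util.List;
import org.optaplanner.core.api.domain.solution.cloner.DeepPlanningClone;

public class Visit {
    // Deep-cloning is fine for mutable planning data with a no-arg constructor.
    @DeepPlanningClone
    private List<Visit> plannedDetours = new ArrayList<>();

    // Immutable wrappers like Long have no no-arg constructor and must not be
    // deep-cloned; for a plain field the cloner just copies the reference.
    private Long arrivalTime;
}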

Ontapi Runtime Error including openllet 2.6.3

Adding openllet 2.6.3 to a Maven project that includes Ontapi throws the following error:
Exception in thread "main" java.lang.NoSuchFieldError: TSV
at ru.avicomp.ontapi.OntFormat.<clinit>(OntFormat.java:61)
at ru.avicomp.ontapi.OntologyFactoryImpl$ONTLoaderImpl.guessFormat(OntologyFactoryImpl.java:752)
at ru.avicomp.ontapi.OntologyFactoryImpl$ONTLoaderImpl.getSupportedFormats(OntologyFactoryImpl.java:774)
at ru.avicomp.ontapi.OntologyFactoryImpl$ONTLoaderImpl.read(OntologyFactoryImpl.java:795)
at ru.avicomp.ontapi.OntologyFactoryImpl$ONTLoaderImpl.readGraph(OntologyFactoryImpl.java:725)
at ru.avicomp.ontapi.OntologyFactoryImpl$ONTLoaderImpl.loadGraph(OntologyFactoryImpl.java:580)
at ru.avicomp.ontapi.OntologyFactoryImpl$ONTLoaderImpl.load(OntologyFactoryImpl.java:286)
at ru.avicomp.ontapi.OntologyFactoryImpl.loadOWLOntology(OntologyFactoryImpl.java:109)
at ru.avicomp.ontapi.OntologyFactoryImpl.loadOWLOntology(OntologyFactoryImpl.java:58)
at ru.avicomp.ontapi.OntologyManagerImpl.load(OntologyManagerImpl.java:1678)
at ru.avicomp.ontapi.OntologyManagerImpl.load(OntologyManagerImpl.java:1644)
at ru.avicomp.ontapi.OntologyManagerImpl.loadOntologyFromOntologyDocument(OntologyManagerImpl.java:1587)
at ru.avicomp.ontapi.OntologyManager.loadOntologyFromOntologyDocument(OntologyManager.java:243)
at ru.avicomp.ontapi.OntologyManager.loadOntologyFromOntologyDocument(OntologyManager.java:259)
at ru.avicomp.ontapi.OntologyManager.loadOntologyFromOntologyDocument(OntologyManager.java:58)
The code tested is the following:
OWLOntologyManager manager = OntManagers.createONT();
OWLDataFactory factory = manager.getOWLDataFactory();
OWLOntology ontology = manager.loadOntologyFromOntologyDocument(
        new File("ontologies/E1G1.owl"));
My pom file contains the following dependencies: ontapi 1.1.0, jena-arq 3.6.0, openllet-pellint 2.6.3.
Ensure you only have one owlapi version on the classpath. The stack trace implies there are at least two versions.
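A quick way to confirm is mvn dependency:tree -Dincludes=net.sourceforge.owlapi (net.sourceforge.owlapi is owlapi's group id), which lists every resolved owlapi artifact. If two versions show up, exclude the unwanted copy from whichever dependency drags it in; a hedged sketch, with the artifact id to be adjusted to whatever the tree shows:
<exclusions>
  <exclusion>
    <groupId>net.sourceforge.owlapi</groupId>
    <artifactId>owlapi-distribution</artifactId>
  </exclusion>
</exclusions>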

Zeppelin 0.7.2 org.apache.thrift.transport.TTransportException with Spark and HighCharts

If I add the artifact com.knockdata:spark-highcharts:0.6.4 to Zeppelin, it gives the error org.apache.thrift.transport.TTransportException.
Even a simple example like this causes the error:
val x = Array(1,2,3,4)
val rdd = sc.parallelize(x)
The problem is definitely related to %spark as %md and %sh work. I have Spark version spark-2.1.0-bin-hadoop2.6.
There are no messages in the Spark logs. In zeppelin-interpreter-spark-root-(hostname).log it says:
com.google.gson.JsonSyntaxException: java.lang.IllegalStateException: Expected BEGIN_ARRAY but was BEGIN_OBJECT at line 1 column 2
at com.google.gson.Gson.fromJson(Gson.java:802)
at com.google.gson.Gson.fromJson(Gson.java:757)
at com.google.gson.Gson.fromJson(Gson.java:706)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer.convert(RemoteInterpreterServer.java:425)
org.apache.zeppelin.interpreter.InterpreterException: Exception in thread "main" java.lang.NoSuchMethodError: scala.Predef$.wrapRefArray([Ljava/lang/Object;)Lscala/collection/mutable/WrappedArray;
spark-highcharts:0.6.4 does not support zeppelin:0.7.2. spark-highcharts has a dependency which clearly states which zeppelin version to use, and the two are not binary compatible. That is why the error is reported.
The version has been bumped to spark-highcharts:0.6.5 to support zeppelin:0.7.2 (spark:2.1).

log4j2 how to disable "date:" lookup - log4j throws exception

Edit: this seems not to be possible at the moment; I filed an issue.
I am using log4j2 in my Apache Camel application. In Camel, file names can be configured this way: "?fileName=${date:now:yyyyMMdd-HHmmss}ID.${id}.gz"
If I set the log level to debug, Camel tries to log what it is doing, but log4j seems to look up/interpret the logged string containing "date:" and throws an exception:
2014-11-24 11:29:19,218 ERROR Invalid date format: "now:yyyyMMdd-HHmmss", using default java.lang.IllegalArgumentException: Illegal pattern character 'n'
at java.text.SimpleDateFormat.compile(Unknown Source)
at java.text.SimpleDateFormat.initialize(Unknown Source)
at java.text.SimpleDateFormat.<init>(Unknown Source)
at java.text.SimpleDateFormat.<init>(Unknown Source)
at org.apache.logging.log4j.core.lookup.DateLookup.formatDate(DateLookup.java:60)
at org.apache.logging.log4j.core.lookup.DateLookup.lookup(DateLookup.java:53)
at org.apache.logging.log4j.core.lookup.Interpolator.lookup(Interpolator.java:144)
at org.apache.logging.log4j.core.lookup.StrSubstitutor.resolveVariable(StrSubstitutor.java:1008)
at org.apache.logging.log4j.core.lookup.StrSubstitutor.substitute(StrSubstitutor.java:926)
at org.apache.logging.log4j.core.lookup.StrSubstitutor.substitute(StrSubstitutor.java:816)
at org.apache.logging.log4j.core.lookup.StrSubstitutor.replace(StrSubstitutor.java:385)
at org.apache.logging.log4j.core.pattern.MessagePatternConverter.format(MessagePatternConverter.java:71)
at org.apache.logging.log4j.core.pattern.PatternFormatter.format(PatternFormatter.java:36)
at org.apache.logging.log4j.core.layout.PatternLayout.toSerializable(PatternLayout.java:189)
at org.apache.logging.log4j.core.layout.PatternLayout.toSerializable(PatternLayout.java:53)
at org.apache.logging.log4j.core.layout.AbstractStringLayout.toByteArray(AbstractStringLayout.java:52)
at org.apache.logging.log4j.core.appender.AbstractOutputStreamAppender.append(AbstractOutputStreamAppender.java:104)
at org.apache.logging.log4j.core.config.AppenderControl.callAppender(AppenderControl.java:97)
at org.apache.logging.log4j.core.config.LoggerConfig.callAppenders(LoggerConfig.java:428)
at org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:407)
at org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:365)
at org.apache.logging.log4j.core.Logger.logMessage(Logger.java:112)
at org.apache.logging.log4j.spi.AbstractLogger.logMessage(AbstractLogger.java:1347)
at org.apache.logging.log4j.spi.AbstractLogger.logIfEnabled(AbstractLogger.java:1312)
at org.apache.logging.slf4j.Log4jLogger.debug(Log4jLogger.java:132)
at org.apache.camel.util.IntrospectionSupport.setProperty(IntrospectionSupport.java:518)
at org.apache.camel.util.IntrospectionSupport.setProperty(IntrospectionSupport.java:570)
at org.apache.camel.util.IntrospectionSupport.setProperties(IntrospectionSupport.java:454)
at org.apache.camel.util.EndpointHelper.setProperties(EndpointHelper.java:249)
at org.apache.camel.impl.DefaultComponent.setProperties(DefaultComponent.java:272)
at org.apache.camel.component.file.GenericFileComponent.createEndpoint(GenericFileComponent.java:67)
at org.apache.camel.component.file.GenericFileComponent.createEndpoint(GenericFileComponent.java:37)
at org.apache.camel.impl.DefaultComponent.createEndpoint(DefaultComponent.java:123)
at org.apache.camel.impl.DefaultCamelContext.getEndpoint(DefaultCamelContext.java:514)
at org.apache.camel.impl.DefaultCamelContext.getEndpoint(DefaultCamelContext.java:547)
Is there a way to turn off this "date:" lookup? Why does log4j try to interpret content coming from log statements at all? I think it should not be touched in any way!
Edit: very easy to reproduce in a test:
import org.junit.Test;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LogTest {
    private Logger log = LoggerFactory.getLogger(LogTest.class);

    @Test
    public void test() {
        log.info("${date:now:buhu}");
    }
}
The ${date:...} pattern is crucial to us - only "date:now" works.
So this problem is completely independent of Camel, but Camel uses the ${date:...} pattern for several things. Here is a simple route that reproduces the problem - the exception is thrown during the Camel setup phase, no test code needed - the logging level must be "debug":
public class LogTest extends CamelTestSupport {
    private Logger log = LoggerFactory.getLogger(LogTest.class);

    @Test
    public void test() {
        //log.info("${date:now:yyyyMMdd-HHmmss}");
    }

    @Override
    protected RouteBuilder createRouteBuilder() throws Exception {
        return new RouteBuilder() {
            @Override
            public void configure() throws Exception {
                from("direct:a").to("file:./?fileName=${date:now:yyyyMMdd-HHmmss}ID.${id}.gz");
            }
        };
    }
}
This issue was fixed in version 2.7 of Log4j2.
The solution is to upgrade to that version (or higher) and add the "{nolookups}" option to %msg in the pattern attribute:
%msg{nolookups}
For example:
<PatternLayout pattern="%d{yyyy-MM-dd HH:mm:ss.SSS} %-5level %class{1} %L %M %t - %msg{nolookups}%n%xEx%n" />
The problem can be avoided if the simple expression is written as $simple{..} instead of ${..}. Then log4j2 won't use its date lookup.
So, if you change your Route from:
from("direct:a").to("file:./?fileName=${date:now:yyyyMMdd-HHmmss}ID.${id}.gz");
to:
from("direct:a").to("file:./?fileName=$simple{date:now:yyyyMMdd-HHmmss}ID.${id}.gz");
it should work, even if you debug Camel.
To disable the date lookup locally, you can add a "$" in front of the expression:
log.info("$${date:now:buhu}");
This will print ${date:now:buhu} instead of throwing an exception and printing the stack trace.
As for how to avoid this using Camel, I'm not sure. The cleanest fix would probably be a log4j2 update that disables the DateLookup feature. A temporary fix is to disable DEBUG level logs from the org.apache.camel package:
<loggers>
  <logger name="org.apache.camel" level="INFO" />
  <root level="debug">
    <appender-ref ref="Console" />
  </root>
</loggers>
It's not ideal, but we can always lower that logger back to DEBUG if we ever need to debug Camel context creation, since those log statements are not necessary for general everyday development.
The correct solution is now to upgrade the log4j-core library to 2.15.0 or above. At the time of writing, the latest and currently recommended version is 2.16.0.
The variable substitutions happening in logged messages here are symptoms of the same feature exploited in CVE-2021-44228, aka Log4Shell.
The feature is disabled by default in 2.15.0 and removed in 2.16.0.
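For deployments that cannot upgrade immediately, message lookups can also be switched off on log4j2 2.10 through 2.14.x by setting the system property log4j2.formatMsgNoLookups=true (for example, -Dlog4j2.formatMsgNoLookups=true on the JVM command line). That stopgap proved incomplete for some non-default configurations (CVE-2021-45046), so upgrading remains the recommended fix.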
It's not news to anyone by now, but it's really important to take steps to disable this feature, as a security measure, even if not using Apache Camel or encountering the issue as described.
As an aside, I found this question when searching for early warning signs of the Log4Shell vulnerability. I've quoted it in my write-up.
