Building CN1 app doesn't work due to a problem with retrolambda - codenameone

My app won't build for iPhone and Android anymore, apparently due to a problem with retrolambda.
I haven't built for quite some time (I think 6-7 months), and I don't think the way my app uses lambda functions has changed significantly (though I may be wrong, since I'm not sure how to read the error message), so I was wondering whether something has changed in the CN1 build process.
If not, any help on how to 'decode' the error log below to understand the problem would be really appreciated. I think I've identified the offending code, but it seems to be written the same way it was when it compiled successfully before.
Executing: /usr/local/bin/pod --version Process return code is 0
Pods version: 1.10.0
User-level: 1000
Request Args:
-----------------
ios.background_modes=fetch
ios.multitasking=true
java.version=8
ios.project_type=ios
ios.testFlight=true
android.multidex=true
ios.statusbar_hidden=false
ios.application_exits=false
desktop.theme=iOS7Theme
ios.includePush=false
ios.buildType=debug
ios.interface_orientation=UIInterfaceOrientationPortrait:UIInterfaceOrientationPortraitUpsideDown:UIInterfaceOrientationLandscapeLeft:UIInterfaceOrientationLandscapeRight
ios.newStorageLocation=true
ios.enableBadgeClear=false
android.release=true
android.debug=false
-------------------
OS Version: 10.15.3
Executing: /Applications/Xcode11.3.app/Contents/Developer/usr/bin/xcodebuild -version Process return code is 0
Result is Xcode 11.3.1
Build version 11C505
Xcode version line matching pattern: Xcode 11.3.1
Executing: /Library/Java/JavaVirtualMachines/jdk1.8.0_172.jdk/Contents/Home/bin/java -Dretrolambda.inputDir=/Volumes/MacintoshHD2/temp/build4429909335558849128xxx/classes -Dretrolambda.classpath=/Volumes/MacintoshHD2/temp/build4429909335558849128xxx/classes:/var/folders/p_/xlvwhg4101z8r81_nl13cds80000gn/T/temp3257553847769871116.jar -Dretrolambda.outputDir=/Volumes/MacintoshHD2/temp/build4429909335558849128xxx/classes_retrolamda -Dretrolambda.bytecodeVersion=49 -Dretrolambda.defaultMethods=true -jar /var/folders/p_/xlvwhg4101z8r81_nl13cds80000gn/T/temp5326134886678924546.jar Retrolambda 2.5.1
00:00 INFO: Bytecode version: 49 (Java 5)
00:00 INFO: Default methods: true
00:00 INFO: Input directory: /Volumes/MacintoshHD2/temp/build4429909335558849128xxx/classes
00:00 INFO: Output directory: /Volumes/MacintoshHD2/temp/build4429909335558849128xxx/classes_retrolamda
00:00 INFO: Classpath: [/Volumes/MacintoshHD2/temp/build4429909335558849128xxx/classes, /var/folders/p_/xlvwhg4101z8r81_nl13cds80000gn/T/temp3257553847769871116.jar]
00:00 INFO: Included files: all
00:00 INFO: Agent enabled: false
00:00 INFO: Saving lambda class: com/MyApp/MyApp/MyCheckBox$$Lambda$1
00:00 INFO: Saving lambda class: com/MyApp/MyApp/MyCheckBox$$Lambda$3
00:00 INFO: Saving lambda class: com/MyApp/MyApp/MyCheckBox$$Lambda$4
00:00 INFO: Saving lambda class: com/MyApp/MyApp/MyCheckBox$$Lambda$5
00:00 INFO: Saving lambda class: com/MyApp/MyApp/MyCheckBox$$Lambda$6
00:00 INFO: Saving lambda class: com/MyApp/MyApp/MyCheckBox$$Lambda$7
00:00 INFO: Saving lambda class: com/MyApp/MyApp/MyCheckBox$$Lambda$8
00:00 INFO: Saving lambda class: com/MyApp/MyApp/MyDateAndTimePicker$$Lambda$1
00:00 ERROR: Failed to run Retrolambda
java.lang.RuntimeException: Failed to backport class: com/MyApp/MyApp/ScreenListOfItemLists
at net.orfjackal.retrolambda.Transformers.transform(Transformers.java:129)
at net.orfjackal.retrolambda.Transformers.transform(Transformers.java:107)
at net.orfjackal.retrolambda.Transformers.backportClass(Transformers.java:47)
at net.orfjackal.retrolambda.Retrolambda.run(Retrolambda.java:83)
at net.orfjackal.retrolambda.Main.main(Main.java:28)
Caused by: java.lang.RuntimeException: Failed to backport lambda or method reference: com/MyApp/MyApp/ScreenListOfItemLists.lambda$addCommandsToToolbar$1(Lcom/codename1/ui/events/ActionEvent;)V (7)
at net.orfjackal.retrolambda.lambdas.LambdaReifier.reifyLambdaClass(LambdaReifier.java:42)
at net.orfjackal.retrolambda.lambdas.BackportLambdaInvocations$InvokeDynamicInsnConverter.backportLambda(BackportLambdaInvocations.java:187)
at net.orfjackal.retrolambda.lambdas.BackportLambdaInvocations$InvokeDynamicInsnConverter.visitInvokeDynamicInsn(BackportLambdaInvocations.java:176)
at net.orfjackal.retrolambda.asm.ClassReader.readCode(ClassReader.java:1519)
at net.orfjackal.retrolambda.asm.ClassReader.readMethod(ClassReader.java:1032)
at net.orfjackal.retrolambda.asm.ClassReader.accept(ClassReader.java:708)
at net.orfjackal.retrolambda.asm.ClassReader.accept(ClassReader.java:521)
at net.orfjackal.retrolambda.Transformers.lambda$transform$4(Transformers.java:107)
at net.orfjackal.retrolambda.Transformers.transform(Transformers.java:125)
... 4 more
Caused by: java.lang.IllegalAccessException: no such method: com.MyApp.MyApp.ScreenListOfItemLists.lambda$addCommandsToToolbar$1(ActionEvent)void/invokeSpecial
at java.lang.invoke.MemberName.makeAccessException(MemberName.java:867)
at java.lang.invoke.MemberName$Factory.resolveOrFail(MemberName.java:1003)
at java.lang.invoke.MethodHandles$Lookup.resolveOrFail(MethodHandles.java:1386)
at java.lang.invoke.MethodHandles$Lookup.findSpecial(MethodHandles.java:1004)
at net.orfjackal.retrolambda.lambdas.Types.toMethodHandle(Types.java:53)
at net.orfjackal.retrolambda.lambdas.Types.asmToJdkType(Types.java:26)
at net.orfjackal.retrolambda.lambdas.LambdaReifier.callBootstrapMethod(LambdaReifier.java:106)
at net.orfjackal.retrolambda.lambdas.LambdaReifier.reifyLambdaClass(LambdaReifier.java:37)
... 12 more
Caused by: java.lang.NoClassDefFoundError: com/parse4cn1/ParseObject
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
at net.orfjackal.retrolambda.NonDelegatingClassLoader.loadClass(NonDelegatingClassLoader.java:25)
at java.lang.invoke.MethodHandleNatives.resolve(Native Method)
at java.lang.invoke.MemberName$Factory.resolve(MemberName.java:975)
at java.lang.invoke.MemberName$Factory.resolveOrFail(MemberName.java:1000)
... 18 more
Caused by: java.lang.ClassNotFoundException: com.parse4cn1.ParseObject
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at net.orfjackal.retrolambda.NonDelegatingClassLoader.loadClass(NonDelegatingClassLoader.java:27)
... 31 more
Process return code is 1

This fails in the retrolambda stage but has nothing to do with retrolambda itself. See this error:
Caused by: java.lang.ClassNotFoundException: com.parse4cn1.ParseObject
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at net.orfjackal.retrolambda.NonDelegatingClassLoader.loadClass(NonDelegatingClassLoader.java:27)
... 31 more
ParseObject is missing from your build. It seems there's a dependency (the parse4cn1 library) that isn't properly included. Try deleting the build, bin, target and dist directories if they exist in your project, then rebuild. Also make sure you didn't change the classpath in any way.
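A rough sketch of that cleanup, run from the project root (only delete directories that are actually build output in your setup):
# remove stale build output so the next build starts from a clean set of classes
rm -rf build bin target dist
Then rebuild and send the build again.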

Related

Flink Job not getting submitted. java.io.IOException: Cannot allocate memory

I am using a Flink session cluster (Kubernetes cluster, session mode) to deploy batch jobs with HA. Inside the recovery/default/blob/ directory, directories starting with job_ are piling up.
drwxr-xr-x 1 flink flink 1 Nov 16 09:03 job_747a694a765d1b580a703e2785a9e3fa
Jobs get submitted every minute, but in the above ls -ltr of /recovery/default/blob/, the blobs of one of the jobs are not getting cleared. That job has neither completed nor failed, and it is not listed on the web UI.
The log output when this happens is:
2021-11-22 09:03:11,537 INFO org.apache.flink.kubernetes.highavailability.KubernetesHaServices [] - Finished cleaning up the high availability data for job 6a71a36a3c82d8a9438c9aa9ed6b8993.
2021-11-22 09:03:14,904 ERROR org.apache.flink.runtime.blob.BlobServerConnection [] - PUT operation failed
java.io.IOException: Cannot allocate memory
at java.io.FileOutputStream.writeBytes(Native Method) ~[?:1.8.0_312]
at java.io.FileOutputStream.write(FileOutputStream.java:326) ~[?:1.8.0_312]
at org.apache.flink.core.fs.local.LocalDataOutputStream.write(LocalDataOutputStream.java:55) ~[flink-dist_2.11-1.14.0.jar:1.14.0]
at org.apache.flink.shaded.guava30.com.google.common.io.ByteStreams.copy(ByteStreams.java:113) ~[flink-dist_2.11-1.14.0.jar:1.14.0]
at org.apache.flink.shaded.guava30.com.google.common.io.ByteSource.copyTo(ByteSource.java:243) ~[flink-dist_2.11-1.14.0.jar:1.14.0]
at org.apache.flink.shaded.guava30.com.google.common.io.Files.copy(Files.java:301) ~[flink-dist_2.11-1.14.0.jar:1.14.0]
at org.apache.flink.runtime.blob.FileSystemBlobStore.put(FileSystemBlobStore.java:79) ~[flink-dist_2.11-1.14.0.jar:1.14.0]
at org.apache.flink.runtime.blob.FileSystemBlobStore.put(FileSystemBlobStore.java:72) ~[flink-dist_2.11-1.14.0.jar:1.14.0]
at org.apache.flink.runtime.blob.BlobUtils.moveTempFileToStore(BlobUtils.java:385) ~[flink-dist_2.11-1.14.0.jar:1.14.0]
at org.apache.flink.runtime.blob.BlobServer.moveTempFileToStore(BlobServer.java:680) ~[flink-dist_2.11-1.14.0.jar:1.14.0]
at org.apache.flink.runtime.blob.BlobServerConnection.put(BlobServerConnection.java:350) [flink-dist_2.11-1.14.0.jar:1.14.0]
at org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:110) [flink-dist_2.11-1.14.0.jar:1.14.0]
It seems that Flink is not retrying this job. Is there a config option that can make it retry?

Flink job failed, Caused by: java.io.IOException: The rpc invocation size exceeds the maximum akka framesize

A Flink job failed. The error information is as follows:
2020-12-02 09:37:27
java.util.concurrent.CompletionException: java.lang.reflect.UndeclaredThrowableException
at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:273)
at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:280)
at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1592)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.reflect.UndeclaredThrowableException
at com.sun.proxy.$Proxy41.submitTask(Unknown Source)
at org.apache.flink.runtime.jobmaster.RpcTaskManagerGateway.submitTask(RpcTaskManagerGateway.java:77)
at org.apache.flink.runtime.executiongraph.Execution.lambda$deploy$9(Execution.java:735)
at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590)
... 7 more
Caused by: java.io.IOException: The rpc invocation size exceeds the maximum akka framesize.
at org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.createRpcInvocationMessage(AkkaInvocationHandler.java:270)
at org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.invokeRpc(AkkaInvocationHandler.java:200)
at org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.invoke(AkkaInvocationHandler.java:129)
... 11 more
The logic of this job is simple: it consumes data from Kafka and saves it to ClickHouse.
Start command
flink run -m yarn-cluster -p 2 -ys 2 -yjm 2048 -ytm 2048 -ynm xx --class xx /data/flink/lib/xx.jar -name --input --groupId xx --bootstrapServers xx:9092 --CheckpointInterval 60000 --CheckpointTimeout 600000 --clientId xx
Why is that? Thanks.
The exception means the payload of the message (the JM submitting a task to the TM) exceeds the maximum size. Try increasing the limit by setting akka.framesize in flink-conf.yaml.
The default is 10485760b; set a bigger value. You will probably need to restart the JM/TM or the Flink cluster afterwards.
Doc: https://ci.apache.org/projects/flink/flink-docs-release-1.12/deployment/config.html#akka-framesize
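A minimal sketch of that change (the 20971520b value and the FLINK_HOME path are examples, not recommendations for your job):
# append the larger frame size to the cluster's configuration file
echo "akka.framesize: 20971520b" >> $FLINK_HOME/conf/flink-conf.yaml
# then restart the JobManager/TaskManagers (or the whole cluster) so the new limit is picked up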

Kylin Build Cube sometimes fails at "#19 Step Name: Hive Cleanup" with java.lang.RuntimeException: Failed to read kylin_hive_conf.xml

The error occurs intermittently, and after restarting Kylin (kylin.sh stop and then kylin.sh start), it finds the conf dir location and passes this step.
I am using Kylin version "2.6.2", and KYLIN_CONF="/opt/kylin/conf" is already set correctly.
The error hints differ; I have encountered the following:
1.
java.lang.RuntimeException: Failed to read kylin_hive_conf.xml at '/opt/apache-kylin-2.6.2-bin-hadoop3/bin/meta/kylin_hive_conf.xml'
at org.apache.kylin.common.util.SourceConfigurationUtil.loadXmlConfiguration(SourceConfigurationUtil.java:88)
at org.apache.kylin.common.util.SourceConfigurationUtil.loadHiveConfiguration(SourceConfigurationUtil.java:61)
at org.apache.kylin.common.util.HiveCmdBuilder.<init>(HiveCmdBuilder.java:48)
at org.apache.kylin.source.hive.GarbageCollectionStep.cleanUpIntermediateFlatTable(GarbageCollectionStep.java:63)
at org.apache.kylin.source.hive.GarbageCollectionStep.doWork(GarbageCollectionStep.java:49)
at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:167)
at org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:71)
at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:167)
at org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:114)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2.
java.lang.RuntimeException: Failed to read kylin_hive_conf.xml at '/opt/apache-kylin-2.6.2-bin-hadoop3/bin/meta/kylin_hive_conf.xml'
3.
java.lang.RuntimeException: Failed to read kylin_hive_conf.xml at '/opt/apache-kylin-2.6.2-bin-hadoop3/conf/meta/kylin_hive_conf.xml'
Can anyone kindly help me find the root cause and fix this problem? Thanks in advance.
I hope you have already solved the issue. I encountered the same problem and investigated it.
Refer to https://github.com/apache/kylin/blob/kylin-2.6.2/engine-mr/src/main/java/org/apache/kylin/engine/mr/common/AbstractHadoopJob.java#L481
When we use MapReduce, KYLIN_CONF is set to a different folder:
System.setProperty(KylinConfig.KYLIN_CONF, metaDir.getAbsolutePath());
I think that to work around it, we have to create symlinks for all the XML configuration files (a sketch follows the log snippet below).
Check your Kylin log:
cat YOUR_PATH/apache-kylin-2.6.3-bin-hbase1x/logs/kylin.log | grep "The absolute path"
You will probably see a result like this:
2019-10-14 23:47:04,438 INFO [LocalJobRunner Map Task Executor #0] common.AbstractHadoopJob:482 : The absolute path for meta dir is /SOME_FOLDER/meta
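If you go the symlink route, one possible form of it is below (the /SOME_FOLDER/meta path is the meta dir reported in the log line above and will differ on your machine; the source path assumes KYLIN_CONF=/opt/kylin/conf):
# make the job's meta dir resolve the Hive conf file from the real conf directory
ln -s /opt/kylin/conf/kylin_hive_conf.xml /SOME_FOLDER/meta/kylin_hive_conf.xml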

Katalon Studio fails to launch on Ubuntu 18.04.1 LTS, not sure why

I can't launch Katalon Studio on Ubuntu and it gives me an error. I can't see any logs in the path mentioned; I've navigated to it through the terminal and the file manager, and it is empty. Any ideas as to what is going on here?
Any thoughts/resources to help figure out what kind of error this could be?
I downloaded the latest version of Java from Oracle (linked here). I previously had OpenJDK and thought that could be the problem, but it wasn't. Let me know what you think.
error from trying to launch studio:
error from log file:
!SESSION 2018-12-26 13:21:47.986 -----------------------------------------------
eclipse.buildId=unknown
java.version=10.0.2
java.vendor=Oracle Corporation
BootLoader constants: OS=linux, ARCH=x86_64, WS=gtk, NL=en_US
Command-line arguments: -os linux -ws gtk -arch x86_64 -data config
!ENTRY com.kms.katalon 4 0 2018-12-26 13:21:49.019
!MESSAGE [SCR] Component definition XMLs not found in bundle com.kms.katalon. The component header value is OSGI-INF/component.xml
!ENTRY com.kms.katalon 4 0 2018-12-26 13:21:49.019
!MESSAGE [SCR] Component definition XMLs not found in bundle com.kms.katalon. The component header value is OSGI-INF/component.xml
!ENTRY com.kms.katalon 4 0 2018-12-26 13:21:49.380
!MESSAGE [SCR] Component definition XMLs not found in bundle com.kms.katalon. The component header value is OSGI-INF/component.xml
!ENTRY com.kms.katalon 4 0 2018-12-26 13:21:49.380
!MESSAGE [SCR] Component definition XMLs not found in bundle com.kms.katalon. The component header value is OSGI-INF/component.xml
katalon.versionNumber=5.10.0
katalon.buildNumber=1
Wed Dec 26 13:21:51 CST 2018
!ENTRY org.eclipse.osgi 4 0 2018-12-26 13:21:51.463
!MESSAGE Application error
!STACK 1
java.lang.NoClassDefFoundError: java/sql/SQLException
at com.kms.katalon.logging.LogUtil$1.call(LogUtil.java:46)
at com.kms.katalon.logging.LogUtil.logSync(LogUtil.java:88)
at com.kms.katalon.logging.LogUtil.writeError(LogUtil.java:34)
at com.kms.katalon.logging.LogUtil.logError(LogUtil.java:65)
at com.kms.katalon.logging.LogUtil.logError(LogUtil.java:28)
at com.kms.katalon.core.application.Application.internalRunGUI(Application.java:122)
at com.kms.katalon.core.application.Application.runGUI(Application.java:105)
at com.kms.katalon.core.application.Application.start(Application.java:63)
at org.eclipse.equinox.internal.app.EclipseAppHandle.run(EclipseAppHandle.java:196)
at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:134)
at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:104)
at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:388)
at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:243)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:673)
at org.eclipse.equinox.launcher.Main.basicRun(Main.java:610)
at org.eclipse.equinox.launcher.Main.run(Main.java:1519)
at org.eclipse.equinox.launcher.Main.main(Main.java:1492)
Caused by: java.lang.ClassNotFoundException: java.sql.SQLException
at java.base/java.lang.ClassLoader.findClass(ClassLoader.java:711)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:566)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:499)
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:371)
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:364)
at org.eclipse.osgi.internal.loader.ModuleClassLoader.loadClass(ModuleClassLoader.java:161)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:499)
... 21 more
!ENTRY org.eclipse.e4.ui.workbench 4 0 2018-12-26 13:21:51.469
!MESSAGE FrameworkEvent ERROR
!STACK 0
java.lang.NoClassDefFoundError: javax/annotation/PreDestroy
at org.eclipse.e4.core.internal.di.InjectorImpl.disposed(InjectorImpl.java:426)
at org.eclipse.e4.core.internal.di.Requestor.disposed(Requestor.java:154)
at org.eclipse.e4.core.internal.contexts.ContextObjectSupplier$ContextInjectionListener.update(ContextObjectSupplier.java:78)
at org.eclipse.e4.core.internal.contexts.TrackableComputationExt.update(TrackableComputationExt.java:111)
at org.eclipse.e4.core.internal.contexts.TrackableComputationExt.handleInvalid(TrackableComputationExt.java:74)
at org.eclipse.e4.core.internal.contexts.EclipseContext.dispose(EclipseContext.java:176)
at org.eclipse.e4.core.internal.contexts.osgi.EclipseContextOSGi.dispose(EclipseContextOSGi.java:106)
at org.eclipse.e4.core.internal.contexts.osgi.EclipseContextOSGi.bundleChanged(EclipseContextOSGi.java:139)
at org.eclipse.osgi.internal.framework.BundleContextImpl.dispatchEvent(BundleContextImpl.java:903)
at org.eclipse.osgi.framework.eventmgr.EventManager.dispatchEvent(EventManager.java:230)
at org.eclipse.osgi.framework.eventmgr.ListenerQueue.dispatchEventSynchronous(ListenerQueue.java:148)
at org.eclipse.osgi.internal.framework.EquinoxEventPublisher.publishBundleEventPrivileged(EquinoxEventPublisher.java:213)
at org.eclipse.osgi.internal.framework.EquinoxEventPublisher.publishBundleEvent(EquinoxEventPublisher.java:120)
at org.eclipse.osgi.internal.framework.EquinoxEventPublisher.publishBundleEvent(EquinoxEventPublisher.java:112)
at org.eclipse.osgi.internal.framework.EquinoxContainerAdaptor.publishModuleEvent(EquinoxContainerAdaptor.java:156)
at org.eclipse.osgi.container.Module.publishEvent(Module.java:476)
at org.eclipse.osgi.container.Module.doStop(Module.java:634)
at org.eclipse.osgi.container.Module.stop(Module.java:498)
at org.eclipse.osgi.container.SystemModule.stop(SystemModule.java:202)
at org.eclipse.osgi.internal.framework.EquinoxBundle$SystemBundle$EquinoxSystemModule$1.run(EquinoxBundle.java:165)
at java.base/java.lang.Thread.run(Thread.java:844)
Caused by: java.lang.ClassNotFoundException: javax.annotation.PreDestroy cannot be found by org.eclipse.e4.core.di_1.6.1.v20160712-0927
at org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:410)
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:372)
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:364)
at org.eclipse.osgi.internal.loader.ModuleClassLoader.loadClass(ModuleClassLoader.java:161)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:499)
... 21 more
I figured out what the problem was. It seems that Katalon Studio wants a specific version of OpenJDK, not the newest one. Their documentation on the subject is pretty buried compared to the macOS/Windows installation instructions, and the Linux version is in beta. After uninstalling all JREs/JDKs, I referred to the following:
https://docs.katalon.com/katalon-studio/docs/katalon-studio-gui-beta-for-linux.html
OpenJDK 11 didn't work, but OpenJDK 8 worked just fine. Hopefully they make it work with other versions eventually. I appreciate all the help I got from all of you!
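For reference, a minimal sketch of switching to OpenJDK 8 on Ubuntu 18.04 (standard Ubuntu packages; the exact setup Katalon expects is in the doc linked above):
sudo apt-get install openjdk-8-jdk      # install OpenJDK 8
sudo update-alternatives --config java  # pick the Java 8 binary as the system default
java -version                           # should now report 1.8.x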
The log file might be hidden; could you try to get it?
http://www.lostsaloon.com/technology/how-to-show-hidden-files-in-linux/

I am getting an error on mvn verify site

It gives me this error pointing to one of the pom files. I checked the plugin and it's not null. I even updated it to the latest version; that still doesn't fix it.
Execution default-site of goal org.apache.maven.plugins:maven-site-plugin:3.4:site failed: Anchor name cannot be null
It looks like that exception surfaces in a lot of cases and is incredibly unhelpful.
There was an issue a while back where not having a license name defined in the pom would throw this exception.
I ran into this exception (with version 3.4) where I had a typo in my changes.xml file (verison instead of version). I was only tipped off by seeing that it failed in a ChangesReportGenerator class. Even then I glanced over the typo several times before seeing it.
For anyone else running into this, take the time to check for even the most random typos - it might just be the problem.
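A quick, low-tech way to hunt for that kind of slip (the misspelling and the path are just the ones from my case; src/changes/changes.xml is the plugin's default location):
grep -n "verison" src/changes/changes.xml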
For reference, here's the complete error message from the log:
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] example.products.parent ...................... SUCCESS [0.709s]
[INFO] Example Product .............................. FAILURE [2.199s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 13.162s
[INFO] Finished at: Fri Sep 02 12:53:19 CDT 2016
[INFO] Final Memory: 26M/64M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-site-plugin:3.4:site (default-site) on project example.product: Execution default-site of goal org.apache.maven.plugins:maven-site-plugin:3.4:site failed: Anchor name cannot be null! -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.apache.maven.plugins:maven-site-plugin:3.4:site (default-site) on project example.product: Execution default-site of goal org.apache.maven.
plugins:maven-site-plugin:3.4:site failed: Anchor name cannot be null!
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:225)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
at org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183)
at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:320)
at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:156)
at org.apache.maven.cli.MavenCli.execute(MavenCli.java:537)
at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:196)
at org.apache.maven.cli.MavenCli.main(MavenCli.java:141)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:290)
at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:230)
at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:409)
at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:352)
Caused by: org.apache.maven.plugin.PluginExecutionException: Execution default-site of goal org.apache.maven.plugins:maven-site-plugin:3.4:site failed: Anchor name cannot be null!
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:110)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:209)
... 19 more
Caused by: java.lang.NullPointerException: Anchor name cannot be null!
at org.apache.maven.doxia.sink.XhtmlBaseSink.anchor(XhtmlBaseSink.java:1545)
at org.apache.maven.doxia.siterenderer.sink.SiteRendererSink.anchor(SiteRendererSink.java:253)
at org.apache.maven.doxia.sink.XhtmlBaseSink.anchor(XhtmlBaseSink.java:1533)
at org.apache.maven.plugin.issues.AbstractIssuesReportGenerator.sinkSectionTitle2Anchor(AbstractIssuesReportGenerator.java:181)
at org.apache.maven.plugin.changes.ChangesReportGenerator.constructRelease(ChangesReportGenerator.java:528)
at org.apache.maven.plugin.changes.ChangesReportGenerator.constructReleases(ChangesReportGenerator.java:511)
at org.apache.maven.plugin.changes.ChangesReportGenerator.doGenerateReport(ChangesReportGenerator.java:230)
at org.apache.maven.plugin.changes.ChangesMojo.executeReport(ChangesMojo.java:356)
at org.apache.maven.reporting.AbstractMavenReport.generate(AbstractMavenReport.java:196)
at org.apache.maven.plugins.site.render.ReportDocumentRenderer.renderDocument(ReportDocumentRenderer.java:224)
at org.apache.maven.doxia.siterenderer.DefaultSiteRenderer.renderModule(DefaultSiteRenderer.java:311)
at org.apache.maven.doxia.siterenderer.DefaultSiteRenderer.render(DefaultSiteRenderer.java:129)
at org.apache.maven.plugins.site.render.SiteMojo.renderLocale(SiteMojo.java:182)
at org.apache.maven.plugins.site.render.SiteMojo.execute(SiteMojo.java:141)
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:101)
... 20 more
[ERROR]
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/PluginExecutionException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
