Maven surefire illegal argument exception - maven-surefire-plugin

When running JUnit 5 tests on a Java JAR and loading a dependency, there's a warning:
Corrupted STDOUT by directly writing to native stream in forked JVM 1. See FAQ web page and the dump file O:\VSTS\_work\2\s\target\surefire-reports\2019-11-04T13-14-53_351-jvmRun1.dumpstream
When I go to look at the dumpstream, it's full of entries like:
Corrupted STDOUT by directly writing to native stream in forked JVM 1. Stream '13:14:57.199 6960-Log dbug system Thread::GoThread Thread 6960-Log started.'.
java.lang.IllegalArgumentException: Stream stdin corrupted. Expected comma after third character in command '13:14:57.199 6960-Log dbug system Thread::GoThread Thread 6960-Log started.'.
at org.apache.maven.plugin.surefire.booterclient.output.ForkClient$OperationalData.<init>(ForkClient.java:507)
at org.apache.maven.plugin.surefire.booterclient.output.ForkClient.processLine(ForkClient.java:210)
at org.apache.maven.plugin.surefire.booterclient.output.ForkClient.consumeLine(ForkClient.java:177)
at org.apache.maven.plugin.surefire.booterclient.output.ThreadedStreamConsumer$Pumper.run(ThreadedStreamConsumer.java:88)
at java.lang.Thread.run(Thread.java:745)
What's gone wrong with the surefire booterclient?
Per Maven surefire could not find ForkedBooter class, setting
<useSystemClassLoader>false</useSystemClassLoader>
resolved the dependency-loading problem, but not the corrupted stream.
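For reference, useSystemClassLoader is a Surefire configuration element rather than a system property, so as a standalone fix it would look like this minimal sketch (version tag omitted):

<plugin>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <useSystemClassLoader>false</useSystemClassLoader>
  </configuration>
</plugin>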

The error was resolved by setting forkCount to 0:
<plugin>
  <artifactId>maven-surefire-plugin</artifactId>
  <version>${maven-surefire-plugin.version}</version>
  <configuration>
    <forkCount>0</forkCount>
    <argLine>-Xmx1024m -XX:MaxPermSize=256m</argLine>
    <systemPropertyVariables>
      <useSystemClassLoader>false</useSystemClassLoader>
      <concordion.output.dir>target/concordion</concordion.output.dir>
    </systemPropertyVariables>
  </configuration>
</plugin>
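Note that with forkCount set to 0 the tests run inside the Maven JVM itself, so the argLine above is not applied to any forked process. If forking is still required, a commonly suggested alternative (an assumption here, not verified against this build) is upgrading the plugin, since Surefire 3.0.0-M5 moved the forked-JVM communication off the standard streams to a TCP channel and is far less sensitive to tests writing directly to native stdout:

<plugin>
  <artifactId>maven-surefire-plugin</artifactId>
  <version>3.0.0-M5</version>
</plugin>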

Related

SWUpdate on RPi4 via yocto - error parsing configuration file

After booting a Yocto-generated SWUpdate image for the first time, executing swupdate results in the error message:
Error parsing configuration file: 'globals' section missing, exiting.
I tried to strictly follow SWUpdate's documentation, but it falls short when it comes to Yocto integration. I'm using the meta-swupdate, meta-swupdate-boards, and meta-openembedded layers together with the poky example repository, all at the Kirkstone tag, building via bitbake update-image and having modified local.conf as follows:
MACHINE ??= "raspberrypi4-64"
ENABLE_UART = "1"
RPI_USE_U_BOOT = "1"
IMAGE_FSTYPES = "wic ext4.gz"
PREFERRED_PROVIDER_u-boot-fw-utils = "libubootenv"
IMAGE_INSTALL:append = " swupdate"
Is there anything else I need to modify to generate the configuration file and be able to run the SWUpdate binary properly?
Side question: in the documentation, it's recommended to append swupdate-www to achieve a better web server. However, if I append it, there is no swupdate-www binary inside the /usr/bin directory.
As with the other recipe folders, the recipes-support/swupdate/swupdate/raspberrypi4-64 folder was missing from the meta-swupdate-boards layer, so an empty config file was always generated. After adding this folder and all related files, strongly inspired by the raspberrypi3 folder, the error was gone and swupdate -h produced the expected output.
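For reference, the part the generated /etc/swupdate.cfg was missing is the globals section, written in libconfig syntax; a minimal sketch (the values are illustrative assumptions, not the meta-swupdate defaults):

globals :
{
    verbose = true;
    loglevel = 5;
};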
There was also one new error thrown by Yocto during the build process. It was related to a missing systemd requirement and was solved by adding:
DISTRO_FEATURES:append = " systemd"
to local.conf.

Log file does not contain complete logs although execution has completed

Please excuse the long post.
I am a newbie to logging implementations, and I'm trying to read a log file which gets overwritten every time the build is run. The log file contains details of some workflow execution steps.
Configuration for the log4j2.xml file:
<appenders>
  <File name="InfoLog" fileName="build/var/debug.log" bufferedIO="true" immediateFlush="false" append="false">
    <PatternLayout pattern="%d{yyyy-MM-dd HH:mm:ss.SSS z}{UTC} ~ [%t] ~ %-5level ~ %logger{1} ~ %msg%n"/>
  </File>
</appenders>
<loggers>
  <Root level="Debug">
    <AppenderRef ref="InfoLog"/>
  </Root>
</loggers>
An example code snippet of my logger implementation:
Note: I have API calls to other services and require those logs too in my file, hence I used the mu.KLogging library, as it made it super easy. I wasn't able to get the logs from calls to other APIs using java.util.logging or org.apache.log4j.
import mu.KLogging

interface TestsLogger {
    companion object {
        val logger = KLogging().logger(" WORKFLOW STEPS ")
    }
}

// In some class
logger.info { "Running ${(workflowMap.currentStep).toUpperCase()}" }
logger.error { workflowExecutionException }
Now, when I try to read the debug.log file (a few seconds after all steps have been executed), for some reason not all the logs are read from the file.
TimeUnit.MILLISECONDS.sleep(5_000)
var lines: List<String> = Files.readAllLines(File("build/var/debug.log").toPath())
Digging into the cause of this behaviour, I found that at the moment the debug.log file is read, it does not yet contain all the logs. The last few lines only get appended to the file when the build completes, even though the steps whose log information I need have already executed.
FYI: I am using a Gradle build in IntelliJ.
I am trying to understand why this strange behaviour occurs, and how I can solve it so that the logs are written promptly and my file read sees all the logs generated up to that moment.
You have bufferedIO set to true and immediateFlush set to false. This is good for performance in long-running apps that write logs frequently, but for an application that doesn't write much it means the log events will sit in a buffer waiting to be written. The buffers are flushed at application shutdown, unless the JVM is killed or hits some other terminal error that prevents normal termination.
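If the file must be complete while the application is still running, the simplest change is to disable buffering on the appender, trading some throughput; a minimal sketch based on the configuration above:

<appenders>
  <File name="InfoLog" fileName="build/var/debug.log" bufferedIO="false" immediateFlush="true" append="false">
    <PatternLayout pattern="%d{yyyy-MM-dd HH:mm:ss.SSS z}{UTC} ~ [%t] ~ %-5level ~ %logger{1} ~ %msg%n"/>
  </File>
</appenders>

Alternatively, calling org.apache.logging.log4j.LogManager.shutdown() before reading the file stops the logging subsystem and flushes any buffered events to disk.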

Cannot rename file warning while using idempotent readlock in Camel

I am using Camel version 2.17.1 in a clustered environment with 2 nodes to process files. I use an idempotent readLock in my file consumer endpoint, with a JdbcMessageIdRepository as the idempotent repository, to stop multiple servers trying to process the same file.
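For context, a sketch of how the shared repository behind #filesInIdempotentRepository (the bean name from the endpoint URI in the warning below) might be registered; the processor name "fileConsumer" and the DataSource wiring are assumptions, and both nodes must point at the same database for the lock to be cluster-wide:

import org.apache.camel.impl.DefaultCamelContext;
import org.apache.camel.impl.SimpleRegistry;
import org.apache.camel.processor.idempotent.jdbc.JdbcMessageIdRepository;

SimpleRegistry registry = new SimpleRegistry();
// dataSource is assumed to be a javax.sql.DataSource shared by both nodes.
registry.put("filesInIdempotentRepository",
        new JdbcMessageIdRepository(dataSource, "fileConsumer"));
DefaultCamelContext context = new DefaultCamelContext(registry);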
Occasionally I see warnings in the SystemOut logs of one node saying a file could not be renamed, while the other node processes the same file successfully.
Warning example:
[12/10/16 10:21:03:312 BST] 0000008f SystemOut O 12 Oct 2016 10:21:03 [r-inboundCamelContext_Worker-1] FileConsumer WARN Endpoint[file://drop?delay=30000&idempotentKey=${file:name}-${file:modified}-${file:size}&idempotentRepository=#filesInIdempotentRepository&move=done/${date:now:yyyyMMdd}/${date:now:yyyyMMddHHmmssSSS}-${file:name}&moveFailed=error/${date:now:yyyyMMdd}/${date:now:yyyyMMddHHmmssSSS}-${file:name}&preMove=processing&readLock=idempotent&readLockRemoveOnCommit=true&runLoggingLevel=DEBUG&scheduler=quartz2&scheduler.cron=*+*+*+*+*+?] cannot begin processing file: GenericFile[drop\file-20161012_101320.xml] due to: Cannot rename file: GenericFile[drop\file-20161012_101320.xml] to: GenericFile[drop\processing\file-20161012_101320.xml]. Caused by: [org.apache.camel.component.file.GenericFileOperationFailedException - Cannot rename file: GenericFile[drop\file-20161012_101320.xml] to: GenericFile[drop\processing\file-20161012_101320.xml]]
org.apache.camel.component.file.GenericFileOperationFailedException: Cannot rename file: GenericFile[drop\file-20161012_101320.xml] to: GenericFile[drop\processing\file-20161012_101320.xml]
at org.apache.camel.component.file.strategy.GenericFileProcessStrategySupport.renameFile(GenericFileProcessStrategySupport.java:115)
at org.apache.camel.component.file.strategy.GenericFileRenameProcessStrategy.begin(GenericFileRenameProcessStrategy.java:43)
at org.apache.camel.component.file.GenericFileConsumer.processExchange(GenericFileConsumer.java:367)
at org.apache.camel.component.file.GenericFileConsumer.processBatch(GenericFileConsumer.java:226)
at org.apache.camel.component.file.GenericFileConsumer.poll(GenericFileConsumer.java:190)
at org.apache.camel.impl.ScheduledPollConsumer.doRun(ScheduledPollConsumer.java:175)
at org.apache.camel.impl.ScheduledPollConsumer.run(ScheduledPollConsumer.java:102)
at org.apache.camel.pollconsumer.quartz2.QuartzScheduledPollConsumerJob.execute(QuartzScheduledPollConsumerJob.java:61)
at org.quartz.core.JobRunShell.run(JobRunShell.java:202)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:573)
Is there a way to stop both nodes trying to process the same file?

Visual Studio Hangs After Adding Post-Build-Event

I am following this tutorial, which explains how to attach post-build events to a project.
This is my .bat file (tried with and without the D: remmed out):
CMD
ECHO parameter=%1
CD %1
rem D:
COPY WpfFileDeleter.exe temp.exe
"..\..\ILMerge.exe" /out:"WpfFileDeleter.exe" "temp.exe" "Microsoft.WindowsAPICodePack.dll" "Microsoft.WindowsAPICodePack.ExtendedLinguisticServices.dll" "Microsoft.WindowsAPICodePack.Sensors.dll" "Microsoft.WindowsAPICodePack.Shell.dll" "Microsoft.WindowsAPICodePack.ShellExtensions.dll"
DEL temp.exe
And I also added this ILMerge.exe.config as per the tutorial (I was getting the "Unresolved assembly reference not allowed" error):
<configuration>
<startup useLegacyV2RuntimeActivationPolicy="true">
<requiredRuntime safemode="true" imageVersion="v4.0.30319" version="v4.0.30319"/>
</startup>
</configuration>
But when I build my project in VS it just hangs with this message in Output:
1>------ Rebuild All started: Project: WpfFileDeleter, Configuration: Debug Any CPU ------
I can see that some files have been copied to bin/debug, such as the .dlls I specified and temp.exe, but the resulting WpfFileDeleter.exe cannot be run, as it has not been merged properly.
My question is: how can I debug this issue? Is there some way of outputting the results of ILMerge or the build process so that I can see where it is going wrong?
I was able to resolve this by specifying the target platform when calling ILMerge in the batch file:
"..\..\ILMerge.exe" /out:"WpfFileDeleter.exe" /targetPlatform:"v4" "temp.exe" "Microsoft.WindowsAPICodePack.dll" "Microsoft.WindowsAPICodePack.ExtendedLinguisticServices.dll" "Microsoft.WindowsAPICodePack.Sensors.dll" "Microsoft.WindowsAPICodePack.Shell.dll" "Microsoft.WindowsAPICodePack.ShellExtensions.dll"

How to release lock of file (camel exchange) to move it on exception (corrupted gz file)

I need to implement a handler that reacts to ZipException to move away corrupted gz files; otherwise the route will endlessly retry unmarshalling the gz.
The problem is that at the moment the exception is thrown, there is a lock on the file (on Linux, canWrite() returns false) and there is the Camel lock file.
Is there an elegant Camel way to configure onException so that the lock is released (i.e. get write access and remove the lock file, if there is one)?
At the moment my code looks like this:
onException(ZipException.class)
    .handled(true)
    .process(corruptedFileProcessor)
    .stop();
Thanks in advance.
The following route reads gzipped files from srcDir, writes unzipped files to destDir (without the .gz extension) and when a ZipException occurs, sends the file to errorDir.
from("file://srcDir/?delete=true")
.onException(ZipException.class)
.handled(true).useOriginalMessage()
.to("file://errorDir?autoCreate=true")
.end()
.unmarshal().gzip()
.to("file://destDir?autoCreate=true&fileName=${file:name.noext}");
