Disable scalatest logging statements when running tests from maven - scalatest

How can I disable the INFO-level log4j messages that appear when running ScalaTest suites?
The log4j.properties is as follows:
log4j.rootLogger=INFO,CA,FA
#Console Appender
log4j.appender.CA=org.apache.log4j.ConsoleAppender
log4j.appender.CA.layout=org.apache.log4j.PatternLayout
log4j.appender.CA.layout.ConversionPattern=%d{HH:mm:ss.SSS} %p %c: %m%n
log4j.appender.CA.Threshold = INFO
#File Appender
log4j.appender.FA=org.apache.log4j.FileAppender
log4j.appender.FA.append=false
log4j.appender.FA.file=target/unit-tests.log
log4j.appender.FA.layout=org.apache.log4j.PatternLayout
log4j.appender.FA.layout.ConversionPattern=%d{HH:mm:ss.SSS} %p %c{1}: %m%n
log4j.appender.FA.Threshold = INFO
..
log4j.logger.org.scalatest=WARN
However, we are still seeing INFO-level log4j messages during the ScalaTest run:
2014-11-30 14:25:57,263 INFO [ScalaTest-run-running-DiscoverySuite] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(840)) - hadoop.native.lib is deprecated. Instead, use io.native.lib.available
2014-11-30 14:25:57,493 INFO [ScalaTest-run-running-DiscoverySuite] hbase.HBaseCommonTestingUtility (HBaseTestingUtility.java:startMiniCluster(840)) - Starting up minicluster with 1 master(s) and 2 regionserver(s) and 2 datanode(s)
2014-11-30 14:25:57,499 INFO [ScalaTest-run-running-DiscoverySuite] hbase.HBaseCommonTestingUtility (HBaseTestingUtility.java:setupClusterTestDir(390)) - Created new mini-cluster data directory: /shared/hwspark/target/

Alternatively, you can drop this bit of code anywhere in one of your tests:
// Grab the root logger through SLF4J and cast it to Logback's implementation
org.slf4j.LoggerFactory.getLogger(org.slf4j.Logger.ROOT_LOGGER_NAME)
  .asInstanceOf[ch.qos.logback.classic.Logger]
  .setLevel(ch.qos.logback.classic.Level.WARN)
which will set all logging to the WARN level. Note this assumes Logback is your SLF4J backend.
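With the log4j 1.x setup shown in the question, a similar one-liner (a sketch, assuming log4j 1.2 is on the test classpath) would be:
// Raise the root logger to WARN programmatically, e.g. from a beforeAll block
org.apache.log4j.Logger.getRootLogger.setLevel(org.apache.log4j.Level.WARN)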

Those log messages are not actually being printed by ScalaTest, but by something you are using from your ScalaTest tests. The reason "ScalaTest" shows up in them is that ScalaTest renames threads while suites and tests execute, so that if a suite hangs forever and someone takes a thread dump to investigate, it is obvious which test and suite are causing the hang. Log4J prints the thread name in square brackets, so that can give you a hint as to where these log messages are coming from.
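To see that renaming for yourself, a minimal sketch (assuming ScalaTest 3.x; the exact name format varies by version and runner):
import org.scalatest.funsuite.AnyFunSuite

class ThreadNameSpec extends AnyFunSuite {
  test("print the renamed thread") {
    // Prints something like "ScalaTest-run-running-ThreadNameSpec",
    // the same name log4j shows in square brackets.
    println(Thread.currentThread().getName)
  }
}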

In my case it was slick.relational. I took the class reported in the ScalaTest-run-running-... thread name, found which package on the classpath it belonged to, and added that specific package to logback.xml as
<logger name="slick.relational" level="INFO"/>
In your case, search for HBaseTestingUtility (or whichever class is reported there) to find which jar contains it, and work out your logback logger name from its package prefix.
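With the question's log4j setup, the same idea means raising the level for the offending packages in log4j.properties. A sketch based on the class names visible in the output above (the package prefixes are inferred, so verify them against the jars on your classpath):
log4j.logger.org.apache.hadoop=WARN
log4j.logger.org.apache.hadoop.hbase=WARN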

Related

[Flink] Task manager initialization failed

I am new to Flink. I am trying to run the Flink example on my local PC (Windows).
However, after I run start-cluster.bat and log in to the dashboard, it shows that there are 0 task managers.
I checked the log, and it seems the task manager fails to initialize:
2020-02-21 23:03:14,202 ERROR org.apache.flink.runtime.taskexecutor.TaskManagerRunner - TaskManager initialization failed.
org.apache.flink.configuration.IllegalConfigurationException: Failed to create TaskExecutorResourceSpec
at org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils.resourceSpecFromConfig(TaskExecutorResourceUtils.java:72)
at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.startTaskManager(TaskManagerRunner.java:356)
at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.<init>(TaskManagerRunner.java:152)
at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.runTaskManager(TaskManagerRunner.java:308)
at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.lambda$runTaskManagerSecurely$2(TaskManagerRunner.java:322)
at org.apache.flink.runtime.security.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30)
at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.runTaskManagerSecurely(TaskManagerRunner.java:321)
at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.main(TaskManagerRunner.java:287)
Caused by: org.apache.flink.configuration.IllegalConfigurationException: The required configuration option Key: 'taskmanager.cpu.cores' , default: null (fallback keys: []) is not set
at org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils.checkConfigOptionIsSet(TaskExecutorResourceUtils.java:90)
at org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils.lambda$checkTaskExecutorResourceConfigSet$0(TaskExecutorResourceUtils.java:84)
at java.util.Arrays$ArrayList.forEach(Arrays.java:3880)
at org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils.checkTaskExecutorResourceConfigSet(TaskExecutorResourceUtils.java:84)
at org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils.resourceSpecFromConfig(TaskExecutorResourceUtils.java:70)
... 7 more
2020-02-21 23:03:14,217 INFO org.apache.flink.runtime.blob.TransientBlobCache - Shutting down BLOB cache
Basically, it looks like the required option 'taskmanager.cpu.cores' is not set. However, I can't find this property in flink-conf.yaml or in the documentation (https://ci.apache.org/projects/flink/flink-docs-release-1.10/ops/config.html) either.
I am using Flink 1.10.0. Any help would be highly appreciated!
That configuration option is intended for internal use only -- it shouldn't be user-configured, which is why it isn't documented.
The windows start-cluster.bat is failing because of a bug introduced in Flink 1.10. See https://jira.apache.org/jira/browse/FLINK-15925.
One workaround is to use the bash script, start-cluster.sh, instead.
See also this mailing list thread: https://lists.apache.org/thread.html/r7693d0c06ac5ced9a34597c662bcf37b34ef8e799c32cc0edee373b2%40%3Cdev.flink.apache.org%3E
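For example, from a bash environment such as WSL or Cygwin (a sketch, assuming the Flink 1.10.0 distribution is unpacked locally):
# The bash scripts compute the resource/memory spec that the .bat
# script fails to generate on Windows (FLINK-15925).
cd flink-1.10.0
./bin/start-cluster.sh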

Android Manifest merger error in Codename One

In a bare bones project, I added these build hints:
android.gradleDep=compile 'com.erikagtierrez.multiple_media_picker:multiple-media-picker:1.0.5'
android.min_sdk_version=23
I would like to import the following Android library to make a CN1Lib (that requires at least Android SDK 23):
https://github.com/erikagtierrez/multiple-media-picker
In short: I spent a day trying to import it; I also experimented with Android Studio and with suggestions found on Stack Overflow (trying to make a custom .aar), without success.
Could you help me import that library? There is a manifest merger error.
In fact, the issue reported by the build server is:
* What went wrong:
Execution failed for task ':processReleaseManifest'.
> Manifest merger failed : Attribute application#label value=(BareBones) from AndroidManifest.xml:15:17-42
is also present at [com.erikagtierrez.multiple_media_picker:multiple-media-picker:1.0.5] AndroidManifest.xml:23:9-41 value=(#string/app_name).
Suggestion: add 'tools:replace="android:label"' to <application> element at AndroidManifest.xml:15:3-43:103 to override.
I also tried to add the build hint:
android.xapplication_attr=tools:replace="android:label"
as suggested by the previous error, without success.
In the last case, I get:
Merging result: ERROR
/tmp/build1659178556337293135xxx/Test/src/main/AndroidManifest.xml:15:3-43:103 Error:
tools:replace specified at line:15 for attribute android:label, but no new value specified
/tmp/build1659178556337293135xxx/Test/src/main/AndroidManifest.xml Error:
Validation failed, exiting
-- Merging decision tree log ---
The last full log is here: https://gist.github.com/jsfan3/dd6c23f86a2ac949f996910c8cece62b
Thank you
This is happening because our code thinks you injected android:label on your own and doesn't inject it, to avoid a collision...
Change the code to this:
android.xapplication_attr=tools:replace="android:label" android:label="App Name"
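For illustration, once the merger succeeds, the <application> element in the generated manifest should carry both attributes, roughly like this (a sketch; "App Name" is a placeholder and other attributes are omitted):
<!-- Sketch of the merged result -->
<application android:label="App Name" tools:replace="android:label">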

Akeneo 2.2.5: No JobInstance found with code "add_to_existing_product_model"

Since the forum at akeneo.com is locked down, I posted my question here.
When I try to add products to a product model via mass edit, I get the following error message:
No JobInstance found with code "add_to_existing_product_model"
[2018-06-19 19:39:31] request.INFO: Matched route "pim_enrich_mass_edit_rest_launch". {"route":"pim_enrich_mass_edit_rest_launch","route_parameters":{"_controller":"pim_enrich.controller.rest.mass_edit:launchAction","_route":"pim_enrich_mass_edit_rest_launch"},"request_uri":"http://pim.eu-trading.eu/rest/mass_edit/","method":"POST"} []
[2018-06-19 19:39:32] request.CRITICAL: Uncaught PHP Exception Symfony\Component\Translation\Exception\NotFoundResourceException: "No JobInstance found with code "add_to_existing_product_model"" at ./vendor/akeneo/pim-community-dev/src/Pim/Bundle/EnrichBundle/MassEditAction/OperationJobLauncher.php line 59 {"exception":"[object] (Symfony\\Component\\Translation\\Exception\\NotFoundResourceException(code: 0): No JobInstance found with code \"add_to_existing_product_model\" at ./vendor/akeneo/pim-community-dev/src/Pim/Bundle/EnrichBundle/MassEditAction/OperationJobLauncher.php:59)"} []
I get this error with the latest version of Akeneo 2 (v2.2.5). The product model was created manually; the products to be associated with it came in through the API.
This error looks like a missing job in the database. Did you run all the doctrine migrations?
To do so you need to launch this command:
bin/console doctrine:migrations:migrate --env=prod
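If you are not sure whether the migrations already ran, you can check their status first (standard DoctrineMigrationsBundle command):
bin/console doctrine:migrations:status --env=prod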
If you already launched the migrations and they failed, you can install a clean 2.2.5 PIM elsewhere and dump the job instance table to be able to add the missing jobs (a dump/re-import sketch follows the list). Here is the list of the jobs to add or update in 2.2:
- add_association
- move_to_category
- add_to_category
- remove_from_category
- add_to_existing_product_model
- compute_family_variant_structure_changes
- compute_completeness_of_products_family
- add_attribute_value
- delete_products_and_product_models
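A minimal sketch of that dump and re-import, assuming MySQL and hypothetical database names clean_pim (fresh 2.2.5 install) and akeneo_pim (broken instance); in Akeneo 2.x the job instance table is akeneo_batch_job_instance:
# Dump the job instance table from the clean install...
mysqldump -u root -p clean_pim akeneo_batch_job_instance > job_instances.sql
# ...then load it into the broken instance (this replaces the whole table).
mysql -u root -p akeneo_pim < job_instances.sql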

Apache Flink RollingFileAppender

I'm using Apache Flink v1.2. I wanted to switch to a rolling file appender to avoid huge log files containing data for several days. However, it doesn't seem to work. I adapted the log4j configuration (log4j.properties) as follows:
log4j.appender.file=org.apache.log4j.rolling.RollingFileAppender
log4j.appender.file.RollingPolicy=org.apache.log4j.rolling.TimeBasedRollingPolicy
log4j.appender.file.DatePattern='.' yyyy-MM-dd-a'.log'
log4j.appender.file.MaxBackupIndex = 15
log4j.appender.file.append=false
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n
First it complains that it cannot find org.apache.log4j.rolling.RollingFileAppender. If I switch it to org.apache.log4j.RollingFileAppender, it then says RollingPolicy and DatePattern are not valid attributes for the RollingFileAppender.
Did anyone else encounter the same issue? Can you suggest what's wrong with this configuration?
In order to use the RollingFileAppender, you first have to add apache-log4j-extras-1.2.17.jar to your classpath (e.g. by adding it to Flink's lib folder).
Next, you have to configure it and specify a FileNamePattern for the RollingPolicy. With the following log4j.properties file I can use the RollingFileAppender:
log4j.appender.file=org.apache.log4j.rolling.RollingFileAppender
log4j.appender.file.RollingPolicy.FileNamePattern=logs/log.%d{yyyyMMdd-HHmm}.log
log4j.appender.file.RollingPolicy=org.apache.log4j.rolling.TimeBasedRollingPolicy
log4j.appender.file.append=false
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n
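If adding the extras jar is not an option, a fallback sketch using plain log4j 1.2 (DailyRollingFileAppender ships with log4j itself, but it supports no MaxBackupIndex, so old files are never pruned; the log path below is a placeholder):
log4j.appender.file=org.apache.log4j.DailyRollingFileAppender
# Hypothetical log path; adjust to your Flink log directory
log4j.appender.file.File=logs/flink.log
log4j.appender.file.DatePattern='.'yyyy-MM-dd
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n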

How can I get AppEngine to log info level only for my app?

I've tried configuring App Engine logging according to this guide, ensuring the logging.properties file is referenced from web.xml. I've configured logging.properties as follows:
.level = WARNING
nilsnett.chinese.backend.level = INFO
The package name of my logging wrapper is nilsnett.chinese.backend. The problem is that even with this configuration, INFO-level log output from my app is filtered out.
I've also tried the following config, which yielded the same result (including the logger class name at the end of the package name):
.level = WARNING
nilsnett.chinese.backend.JavaUtilLogger.level = INFO
To demonstrate that the logging.properties file is actually read, and that I do write INFO-level logging data to App Engine in this service call, let me show you what happens when I set .level=INFO:
So my desired result is to get INFO and higher-level log output from my packages, while other packages, like org.datanucleus, only show output at WARNING or more severe. In the example above, I want only the two lines marked with the purple star. Am I doing anything wrong?
change your config to:
.level = WARNING
# Set the default logging level for the datanucleus loggers
DataNucleus.JDO.level=WARNING
DataNucleus.Persistence.level=WARNING
DataNucleus.Cache.level=WARNING
DataNucleus.MetaData.level=WARNING
DataNucleus.General.level=WARNING
DataNucleus.Utility.level=WARNING
DataNucleus.Transaction.level=WARNING
DataNucleus.Datastore.level=WARNING
DataNucleus.ClassLoading.level=WARNING
DataNucleus.Plugin.level=WARNING
DataNucleus.ValueGeneration.level=WARNING
DataNucleus.Enhancer.level=WARNING
DataNucleus.SchemaTool.level=WARNING
# FinalizableReferenceQueue tries to spin up a thread and fails. This
# is inconsequential, so don't scare the user.
com.google.common.base.FinalizableReferenceQueue.level=WARNING
com.google.appengine.repackaged.com.google.common.base.FinalizableReferenceQueue.level=WARNING
These entries come from the logging config template, so to set DataNucleus to WARNING you have to do it like in this template.
https://developers.google.com/appengine/docs/java/#Logging
and then just add your own logging config:
nilsnett.chinese.backend.level = INFO
This should solve it.
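One more thing worth double-checking: on the App Engine Java standard environment, logging.properties is wired up through appengine-web.xml (not web.xml), roughly like this:
<!-- In appengine-web.xml -->
<system-properties>
  <property name="java.util.logging.config.file" value="WEB-INF/logging.properties"/>
</system-properties>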
