Molecule fails with "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result" - ansible-2.x

I am using Ansible with Molecule. I just ran into the situation that converging my role failed with:
fatal: [instance]: FAILED! => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result"}
How do I mitigate this? Hold on a sec, I will answer it myself below ...

What worked for me was to set logging to true for Molecule:
Go to the molecule.yml file; this is where you configure Molecule. You should find it in the molecule/default/ directory.
Look for the provisioner: section
Add log: true to it.
Voilà!
That's how it looks:
provisioner:
  name: ansible
  log: true
Note that there may be other settings in this section for the provisioner.
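For context, here is a minimal molecule/default/molecule.yml sketch with the log option in place; the dependency, driver, platform, and verifier values below are illustrative placeholders, not taken from the original setup:
dependency:
  name: galaxy
driver:
  name: docker
platforms:
  - name: instance
    image: ubuntu:20.04
provisioner:
  name: ansible
  log: true   # make Molecule print the full Ansible output
verifier:
  name: ansible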

Related

Unable to install devstack with designate

I am new to the OpenStack environment and started to get into it with a small DevStack setup. I worked through the following instructions on an Ubuntu 18.04 machine and everything worked fine. In order to play with some DNS zones, I started to research Designate. After adapting the following instructions to my setup, I got some errors.
Executing stack.sh produces the following error:
++/opt/stack/designate/devstack/plugin.sh:source:5 set +o xtrace
2021-01-12 21:44:39.009 | Initializing Designate
DROP DATABASE
Could not load 'database': type object 'deprecated' has no attribute 'WALLABY'
Could not load 'pool': type object 'deprecated' has no attribute 'WALLABY'
Could not load 'tlds': type object 'deprecated' has no attribute 'WALLABY'
usage: designate [-h] [--config-dir DIR] [--config-file PATH] [--debug]
[--log-config-append PATH] [--log-date-format DATE_FORMAT]
[--log-dir LOG_DIR] [--log-file PATH] [--nodebug]
[--nouse-journal] [--nouse-json] [--nouse-syslog]
[--nowatch-log-file]
[--syslog-log-facility SYSLOG_LOG_FACILITY] [--use-journal]
[--use-json] [--use-syslog] [--watch-log-file]
{} ...
designate: error: argument category: invalid choice: 'database' (choose from )
Error on exit
World dumping... see /opt/stack/logs/worlddump-2021-01-12-214442.txt for details
nova-compute: no process found
neutron-dhcp-agent: no process found
neutron-l3-agent: no process found
neutron-metadata-agent: no process found
neutron-openvswitch-agent: no process found
I was not sure whether my setup was valid, so I tried the example config from the Designate tutorial, but the same problem occurred.
My actual local.conf:
[[local|localrc]]
USE_PYTHON3=True
ADMIN_PASSWORD=***
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=$ADMIN_PASSWORD
DEST=/opt/stack
SERVICE_HOST=192.168.1.***
HOST_IP=$SERVICE_HOST
disable_service mysql
enable_service postgresql
enable_plugin designate https://opendev.org/openstack/designate
enable_service tempest
Checking plugin.sh, it looks like the error comes from this function:
function init_designate {
    # (Re)create designate database
    recreate_database designate utf8

    # Init and migrate designate database
    $DESIGNATE_BIN_DIR/designate-manage database sync

    init_designate_backend
}
Hope somebody can give me a hint on how to run DevStack with Designate.
Thanks in advance.
The issue you are having is a version mismatch between the cloud install and the Designate plugin. Designate is expecting a newer version of the oslo_log package.
Check that the "devstack" version you have checked out is on the master branch.
The line:
enable_plugin designate https://opendev.org/openstack/designate
is pulling the master branch of Designate for the DevStack plugin.
If you are trying to install a stable branch version of OpenStack, you will need to specify a branch reference for the DevStack plugin as well (for example, stable/victoria):
enable_plugin designate https://opendev.org/openstack/designate stable/victoria
As mentioned above, you will also need to enable the designate services:
enable_service designate,designate-central,designate-api,designate-worker,designate-producer,designate-mdns
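Putting that together, a minimal local.conf sketch for a stable/victoria install might look like this (the passwords and host values are placeholders, not from the original config):
[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=$ADMIN_PASSWORD
# Pin the plugin to the same stable branch as your devstack checkout
enable_plugin designate https://opendev.org/openstack/designate stable/victoria
# Enable the designate services
enable_service designate,designate-central,designate-api,designate-worker,designate-producer,designate-mdns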

[flink] Task manager initialization failed

I am new to Flink. I am trying to run the Flink example on my local PC (Windows).
However, after I run start-cluster.bat and log in to the dashboard, it shows that the number of task managers is 0.
I checked the log and it seems the TaskManager fails to initialize:
2020-02-21 23:03:14,202 ERROR org.apache.flink.runtime.taskexecutor.TaskManagerRunner - TaskManager initialization failed.
org.apache.flink.configuration.IllegalConfigurationException: Failed to create TaskExecutorResourceSpec
at org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils.resourceSpecFromConfig(TaskExecutorResourceUtils.java:72)
at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.startTaskManager(TaskManagerRunner.java:356)
at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.<init>(TaskManagerRunner.java:152)
at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.runTaskManager(TaskManagerRunner.java:308)
at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.lambda$runTaskManagerSecurely$2(TaskManagerRunner.java:322)
at org.apache.flink.runtime.security.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30)
at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.runTaskManagerSecurely(TaskManagerRunner.java:321)
at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.main(TaskManagerRunner.java:287)
Caused by: org.apache.flink.configuration.IllegalConfigurationException: The required configuration option Key: 'taskmanager.cpu.cores' , default: null (fallback keys: []) is not set
at org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils.checkConfigOptionIsSet(TaskExecutorResourceUtils.java:90)
at org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils.lambda$checkTaskExecutorResourceConfigSet$0(TaskExecutorResourceUtils.java:84)
at java.util.Arrays$ArrayList.forEach(Arrays.java:3880)
at org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils.checkTaskExecutorResourceConfigSet(TaskExecutorResourceUtils.java:84)
at org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils.resourceSpecFromConfig(TaskExecutorResourceUtils.java:70)
... 7 more
2020-02-21 23:03:14,217 INFO org.apache.flink.runtime.blob.TransientBlobCache - Shutting down BLOB cache
Basically, it looks like the required option 'taskmanager.cpu.cores' is not set. However, I can't find this property in flink-conf.yaml or in the documentation (https://ci.apache.org/projects/flink/flink-docs-release-1.10/ops/config.html) either.
I am using Flink 1.10.0. Any help would be highly appreciated!
That configuration option is intended for internal use only -- it shouldn't be user configured, which is why it isn't documented.
The Windows start-cluster.bat script is failing because of a bug introduced in Flink 1.10. See https://jira.apache.org/jira/browse/FLINK-15925.
One workaround is to use the bash script, start-cluster.sh, instead.
See also this mailing list thread: https://lists.apache.org/thread.html/r7693d0c06ac5ced9a34597c662bcf37b34ef8e799c32cc0edee373b2%40%3Cdev.flink.apache.org%3E
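A minimal sketch of that workaround, assuming a bash-capable environment on Windows (WSL, Cygwin, or Git Bash) and a Flink 1.10.0 distribution unpacked locally:
# from the root of the Flink 1.10.0 distribution
cd flink-1.10.0
./bin/start-cluster.sh                     # starts a local JobManager and TaskManager
# the dashboard at http://localhost:8081 should now report one task manager
tail log/flink-*-taskexecutor-*.log        # check that the TaskManager came up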

proguard.ParseException: Unknown option '-encryptstrings' in proguard.cfg

When I run the mvn install goal with the ProGuard option, I am getting the following error. Previously, I did not have this error, and I could not find what has changed to cause it:
proguard.ParseException: Unknown option '-encryptstrings' in line .. of file 'proguard.cfg'
I am using DexGuard for my project. Is this error because Maven could not identify the DexGuard folder location?
proguard.cfg content:
-dalvik -- unknown option
-android -- unknown option
# Encrypt all strings -- parse exception
-encryptstrings '???*'
The following works without issues:
-dontusemixedcaseclassnames
-dontskipnonpubliclibraryclasses
-dontpreverify
-verbose
-optimizations !code/simplification/arithmetic
-optimizationpasses 30
-allowaccessmodification
-dontpreverify
-dontoptimize
-ignorewarnings
-renamesourcefileattribute Maviance
-keepattributes SourceFile,LineNumberTable,*Annotation*
-keep,allowshrinking,allowobfuscation class android.support.**Compat* { *; }
The option -encryptstrings '???*' is only supported by DexGuard. So when you use ProGuard to build your application, you will receive such an error.
It is therefore advisable to move the DexGuard-related configuration into a separate config file, dexguard-project.txt, that is only included when building with DexGuard, as sketched below.
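A sketch of that split, using the option names from the question (the file names are the conventional ones, not taken from the original build):
# proguard-project.txt -- options that plain ProGuard understands
-dontusemixedcaseclassnames
-dontskipnonpubliclibraryclasses
-dontpreverify
-keepattributes SourceFile,LineNumberTable,*Annotation*
# dexguard-project.txt -- DexGuard-only options, referenced only in DexGuard builds
-encryptstrings '???*'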
I had the same error using DexGuard. The problem was that I was missing this line:
proguardFiles getDefaultDexGuardFile('dexguard-debug.pro')
So Gradle used ProGuard instead of DexGuard, which obviously doesn't have the string encryption feature. The working release configuration is this:
release {
    debuggable true
    minifyEnabled true
    proguardFiles getDefaultDexGuardFile('dexguard-debug.pro')
    signingConfig signingConfigs.release
}

How can I get AppEngine to log info level only for my app?

So I've tried configuring AppEngine logging according to this guide, ensuring I've configured the logging.properties file to be used in web.xml. I've configured logging.properties the following way:
.level = WARNING
nilsnett.chinese.backend.level = INFO
The package name of my logging wrapper is nilsnett.chinese.backend. The problem is that even with this configuration, info-level log output from my app is filtered. Evidence:
I've also tried the following config, which yielded the same result (including the logger class name at the end of the package name):
.level = WARNING
nilsnett.chinese.backend.JavaUtilLogger.level = INFO
To demonstrate that the logging.properties file is actually read, and that I actually do write info-level logging data to App Engine in this service call, let me show you what happens when I set .level=INFO:
So my desired result is to have INFO and higher-level log output from my packages, while other packages, like org.datanucleus, only show output at WARNING or more severe. In the example above, I want only the two lines marked with the purple star. Am I doing anything wrong?
Change your config to:
.level = WARNING
# Set the default logging level for the datanucleus loggers
DataNucleus.JDO.level=WARNING
DataNucleus.Persistence.level=WARNING
DataNucleus.Cache.level=WARNING
DataNucleus.MetaData.level=WARNING
DataNucleus.General.level=WARNING
DataNucleus.Utility.level=WARNING
DataNucleus.Transaction.level=WARNING
DataNucleus.Datastore.level=WARNING
DataNucleus.ClassLoading.level=WARNING
DataNucleus.Plugin.level=WARNING
DataNucleus.ValueGeneration.level=WARNING
DataNucleus.Enhancer.level=WARNING
DataNucleus.SchemaTool.level=WARNING
# FinalizableReferenceQueue tries to spin up a thread and fails. This
# is inconsequential, so don't scare the user.
com.google.common.base.FinalizableReferenceQueue.level=WARNING
com.google.appengine.repackaged.com.google.common.base.FinalizableReferenceQueue.level=WARNING
These entries come from the logging config template, so to set DataNucleus to WARNING you have to do it like in this template:
https://developers.google.com/appengine/docs/java/#Logging
Then just add your own logging config:
nilsnett.chinese.backend.level = INFO
This should solve it.
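For completeness, the logging.properties file also has to be wired up so App Engine picks it up; on App Engine Java this is done with a system property in appengine-web.xml (a sketch, assuming the file lives at WEB-INF/logging.properties):
<!-- appengine-web.xml: point java.util.logging at the custom properties file -->
<system-properties>
  <property name="java.util.logging.config.file" value="WEB-INF/logging.properties"/>
</system-properties>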

ATG catalog export error in startSQLRepository

I want to export the catalog data from ATG production. I followed the steps below.
Create a FakeXADatasource.properties file in C:\ATG\ATG10.1.1\home\localconfig\atg\dynamo\service\jdbc. (There is a MySQL user named atguser with password atg123$.)
$class=atg.service.jdbc.FakeXADataSource
URL=jdbc:mysql://localhost:3306/prod_lo
user=atguser
password=atg123$
driver=com.mysql.jdbc.Driver
Change JTDataSource.properties as below.
$class=atg.service.jdbc.MonitoredDataSource
dataSource=/atg/dynamo/service/jdbc/FakeXADataSource
transactionManager=/atg/dynamo/transaction/TransactionManager
loggingSQLInfo=false
min=10
maxFree=-1
loggingSQLError=false
blocking=true
loggingSQLWarning=false
max=10
loggingSQLDebug=false
then run the "
startSQLRepository.bat -m Store.Storefront -export all
catalogExport.xml -repository /atg/commerce/catalog/ProductCatalog"
command.
But while it is processing, it gives the error below. Does anyone know the reason, or how to do a complete catalog export? (I have removed the last part of the error log because it exceeds the maximum length of 30,000 characters.)
./startSQLRepository -m Store.Storefront -export all catalogExport.xml -repository /atg/commerce/catalog/ProductCatalog
Error:
Error /atg/dynamo/service/jdbc/JTDataSource an exception was
encountered while trying to populate the pool with the starting number
of resources: atg.service.resourcepool.ResourcePoolException:
java.sql.SQLException: Access denied for user 'root'@'localhost'
(using password: NO)
Error /atg/dynamo/service/jdbc/JTDataSource The connection pool failed to initialize propertly, i.e. the starting number of
connections could not be created; check your database accessibility
and JDBC driver configuration
Error /atg/dynamo/service/IdGenerator CONTAINER:atg.service.idgen.IdGeneratorException;
SOURCE:CONTAINER:atg.service.idgen.IdGeneratorException;
SOURCE:java.sql.SQLException:
atg.service.resourcepool.ResourcePoolException: java.sql.SQLException:
Access denied for user 'root'@'localhost' (using password: NO)
Error /atg/dynamo/service/IdGenerator at atg.service.idgen.PersistentIdGenerator.initialize(PersistentIdGenerator.java:389)
Error /atg/dynamo/service/IdGenerator at atg.service.idgen.AbstractSequentialIdGenerator.doStartService(AbstractSequentialIdGenerator.java:643)
Try setting the min and max pool sizes to 1 and 5; see the sketch after this answer.
Also make sure your DB is up and running and can be connected to.
-DC21
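A sketch of that first suggestion in JTDataSource.properties, assuming "min and max pool sizes of 1 and 5" means min=1 and max=5 (the other properties are taken from the question):
$class=atg.service.jdbc.MonitoredDataSource
dataSource=/atg/dynamo/service/jdbc/FakeXADataSource
transactionManager=/atg/dynamo/transaction/TransactionManager
min=1
max=5
maxFree=-1
blocking=true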
The configuration you are giving to startSQLRepository is not being picked up at runtime, because the error still says 'using password: NO', and the second error is with your connection pool. My suggestion is to try changing only the FakeXADatasource.properties file with the username and password. I tried with the same configuration and was able to export.
