WLP :: Worklight :: Can't install runtime - worklight-server

I'm using Worklight 6.2 Server edition and I can't deploy a runtime (one that works in other environments) to my server.
I'm using WebSphere Liberty Profile v8.5.5, and when I deploy the runtime via the GUI it reports success; in server.xml I can see the new configuration for the app.
However, when I go to the Worklight Console I don't see my runtime, so I can't upload the app.
In messages.log there is an error regarding the JMX connection.
The quoted error is:
Failed to obtain JMX connection to access an MBean. There might be a JMX configuration error: No JMX connector is configured
I'm mentioning this because I've seen posts on SO saying that these issues might be connected. However, I do have the restConnector-1.0 feature in my WLP features.
Reference: No runtime on my Worklight 6.2 Console after installing analytics
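For reference, the relevant part of a minimal Liberty configuration for the REST JMX connector looks roughly like this (the user, password, and keystore values below are placeholders, not my real configuration):
<featureManager>
    <feature>restConnector-1.0</feature>
</featureManager>
<!-- The REST connector needs SSL and an administrative user to be reachable -->
<keyStore id="defaultKeyStore" password="changeit"/>
<quickStartSecurity userName="admin" userPassword="adminpwd"/>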
In messages.log there are some other things I found interesting, such as the successful start of the runtime I deployed:
[11/12/14 5:50:45:177 CST] 00000012 com.worklight.server.bundle.project.JeeProjectActivator I FWLST0002I: ========= Project /HelloWorld started. The project WAR file version is 6.2.0.00.20140922-2259,running on server version 6.2.0.00.20140613-0730. [project HelloWorld]
and two errors while starting my server:
[11/12/14 5:50:49:911 CST] 00000012 SystemErr R 24 WorklightPU WARN [Scheduled Executor-thread-1] openjpa.Runtime - An error occurred while registering a ClassTransformer with PersistenceUnitInfo: name 'WorklightPU', root URL [file:/opt/IBM/WebSphere/Liberty/usr/shared/resources/worklight/lib/worklight-jee-library.jar]. The error has been consumed. To see it, set your openjpa.Runtime log level to TRACE. Load-time class transformation will not be available.
Second error:
java.lang.RuntimeException: Timeout while waiting for the management service to start up
I don't know what these mean, but I think they might be related to my problem; these errors eventually appear when I start my server.
Does anyone have any tips for troubleshooting this issue?
Thanks in advance.

This is a known issue in WebSphere.
There is an APAR to fix it; as a workaround, restart the server with the --clean option to force a refresh of the shared libraries.
http://www-01.ibm.com/support/docview.wss?uid=swg1PI17830
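For example, assuming the server is named worklightServer (substitute your own server name), the clean restart looks like this:
cd /opt/IBM/WebSphere/Liberty
./bin/server stop worklightServer
./bin/server start worklightServer --clean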

Related

Upgrading Apache Flink need to update pom.xml?

I've just upgraded my Flink from version 1.9.1 to 1.11.2 (using Docker).
I already have many Flink jobs running on version 1.9.1.
When I try to upgrade to 1.11.1 and re-run my job, it shows an error:
2020-11-12 06:49:17,731 WARN org.apache.zookeeper.ClientCnxn []
- SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/tmp/jaas-1135609831848314731.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it.
2020-11-12 06:49:17,739 INFO org.apache.zookeeper.ClientCnxn [] - Opening socket connection to server xxxxxx:2181
2020-11-12 06:49:17,741 ERROR org.apache.curator.ConnectionState [] - Authentication failed
And this is the error after deploying my Flink job:
Caused by: java.lang.RuntimeException: API paths not defined
and also:
java.lang.NoSuchMethodError: org.apache.flink.api.common.state.OperatorStateStore.getSerializableListState(Ljava/lang/String;)Lorg/apache/flink/api/common/state/ListState;
Do I need to change the pom.xml for every one of my Flink jobs?
Is there any workaround that does not require changing my source code?
Thanks
Yes, you do have to rebuild your Flink jobs whenever you update the Flink version used to run them. The libraries you use should be from the exact same version as the one used by the Job Manager and Task Managers.
If you are trying to automate deployments for a CI/CD pipeline, you could inject the version number into the pom.xml using an environment variable, but doing things like that can make it hard to debug when things go wrong.
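As a sketch, the Flink version can be centralized in a single Maven property so each job's pom.xml only needs one change per upgrade (the artifact below is illustrative; use the dependencies your jobs actually declare):
<properties>
    <flink.version>1.11.2</flink.version>
</properties>
<dependencies>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-streaming-java_2.11</artifactId>
        <version>${flink.version}</version>
        <scope>provided</scope>
    </dependency>
</dependencies>
The provided scope keeps the Flink runtime classes out of your job jar, so the versions installed on the cluster are the ones that actually run.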

Collecting Metrics with Graphite Plugin leads to "A metric named [..] already exists" error

When I configure flink-conf.yaml to collect metrics with the Graphite plugin,
most of the time only incomplete metrics are sent. In the TaskManager output, multiple errors occur, like:
2018-08-15 00:58:59,016 WARN org.apache.flink.runtime.metrics.MetricRegistryImpl - Error while registering metric.
java.lang.IllegalArgumentException: A metric named mycomputer.taskmanager.8ceab4c3dfbf9fc5fa2af0447f1373a1.State machine job.Source: Custom Source.0.numRecordsOut already exists
at com.codahale.metrics.MetricRegistry.register(MetricRegistry.java:91)
at org.apache.flink.dropwizard.ScheduledDropwizardReporter.notifyOfAddedMetric(ScheduledDropwizardReporter.java:131)
at org.apache.flink.runtime.metrics.MetricRegistryImpl.register(MetricRegistryImpl.java:329)
at org.apache.flink.runtime.metrics.groups.AbstractMetricGroup.addMetric(AbstractMetricGroup.java:379)
at org.apache.flink.runtime.metrics.groups.AbstractMetricGroup.counter(AbstractMetricGroup.java:312)
at org.apache.flink.runtime.metrics.groups.AbstractMetricGroup.counter(AbstractMetricGroup.java:302)
at org.apache.flink.runtime.metrics.groups.OperatorIOMetricGroup.<init>(OperatorIOMetricGroup.java:41)
at org.apache.flink.runtime.metrics.groups.OperatorMetricGroup.<init>(OperatorMetricGroup.java:48)
at org.apache.flink.runtime.metrics.groups.TaskMetricGroup.addOperator(TaskMetricGroup.java:146)
at org.apache.flink.streaming.api.operators.AbstractStreamOperator.setup(AbstractStreamOperator.java:174)
at org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.setup(AbstractUdfStreamOperator.java:82)
at org.apache.flink.streaming.runtime.tasks.OperatorChain.<init>(OperatorChain.java:143)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:267)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:711)
at java.lang.Thread.run(Thread.java:748)
I've tried this on a completely fresh flink-1.6.0 release with the following config and the precompiled "State machine job" from the examples folder:
metrics.reporters: grph
metrics.reporter.grph.class: org.apache.flink.metrics.graphite.GraphiteReporter
metrics.reporter.grph.host: localhost
metrics.reporter.grph.port: 2003
metrics.reporter.grph.interval: 1 SECONDS
metrics.reporter.grph.protocol: TCP
I use the official graphite docker image (https://hub.docker.com/r/graphiteapp/docker-graphite-statsd/) that is running on the default configuration.
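For completeness, I start the container with its defaults, roughly like this (only the web UI and the plaintext Carbon port matter here; the exact image name is the one given on the Docker Hub page linked above):
docker run -d --name graphite -p 80:80 -p 2003:2003 graphiteapp/graphite-statsd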
Does anybody have an idea how I can fix this issue?
Thanks and best regards
Update:
To rule out that a specific local setting is responsible for this behaviour, I repeated the process on a clean EC2 instance. I get exactly the same error there.
How to reproduce:
Start an EC2 t2.xlarge instance
Install Java
Download Flink from https://www.apache.org/dyn/closer.lua/flink/flink-1.6.0/flink-1.6.0-bin-scala_2.11.tgz
Add flink-metrics-graphite-1.6.0.jar to lib
Configure flink-conf.yaml as mentioned in my previous post
./bin/start-cluster.sh
./bin/flink run examples/streaming/StateMachineExample.jar
I have not set up Graphite in this case, because the error obviously already occurs before that.
After the job has started, you can view the error in the Flink dashboard under Task Manager -> Logs.

JDBC Driver has been forcibly unregistered by Tomcat 6

I am getting this error about MySQL:
INFO: Shutting down log4j
09-ene-2014 9:57:21 org.apache.catalina.loader.WebappClassLoader clearReferencesJdbc
GRAVE: A web application registered the JDBC driver [com.mysql.jdbc.Driver] but failed to unregister it when the web application was stopped. To prevent a memory leak, the JDBC Driver has been forcibly unregistered.
09-ene-2014 9:57:21 org.apache.catalina.loader.WebappClassLoader clearReferencesThreads
GRAVE: A web application appears to have started a thread named [Abandoned connection cleanup thread] but has failed to stop it. This is very likely to create a memory leak.
There's a lot of information about this in other questions. I already updated my MySQL lib to version 5.1.28 (before, I was working with 5.0.8, which contains a bug related to this), but it still doesn't work. I have checked that the lib is in my WAR. As a test, I deleted the MySQL lib from WEB-INF/lib and copied it to TOMCAT/lib, and the error went away. Why? I don't want to copy that jar into TOMCAT/lib. I'm working with Tomcat 6.
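One commonly suggested workaround is to deregister, at webapp shutdown, only the drivers that were loaded by the webapp's own classloader, so Tomcat does not have to unregister them forcibly. This is only a sketch (the listener class name is mine; with Tomcat 6 / Servlet 2.5 you register it in web.xml):

import java.sql.Driver;
import java.sql.DriverManager;
import java.util.Enumeration;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

public class JdbcCleanupListener implements ServletContextListener {

    public void contextInitialized(ServletContextEvent sce) {
        // nothing to do on startup
    }

    public void contextDestroyed(ServletContextEvent sce) {
        // Deregister only the drivers loaded by this webapp's classloader.
        ClassLoader webappCl = Thread.currentThread().getContextClassLoader();
        Enumeration<Driver> drivers = DriverManager.getDrivers();
        while (drivers.hasMoreElements()) {
            Driver driver = drivers.nextElement();
            if (driver.getClass().getClassLoader() == webappCl) {
                try {
                    DriverManager.deregisterDriver(driver);
                } catch (Exception e) {
                    // log and move on; nothing else can be done at shutdown
                    e.printStackTrace();
                }
            }
        }
    }
}

This does not address the "Abandoned connection cleanup thread" warning, which comes from a thread started by the Connector/J driver itself.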

appcfg.py migrate_traffic using app engine modules

I'm attempting to use appcfg.py migrate_traffic, but I get a 500 error when using it with a module. The documentation states that a module can be specified:
The 'migrate_traffic' command gradually sends an increasing fraction of your
app's traffic from the current default version to another
version. Once all traffic has been migrated, the new version is set as the
default version.
app.yaml specifies the target application, version, and (optionally) module; use
the --application, --version and --module flags to override these values.
Can be thought of as an enhanced version of the 'set_default_version'
command.
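For reference, my invocation looks roughly like this (the application, module, version, and directory names here are placeholders):
appcfg.py migrate_traffic --application=my-app --module=my-module --version=v2 path/to/app_dir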
If I try this with a module, I get the following error:
Error 500: --- begin server output ---
Server Error (500)
A server error has occurred.
--- end server output ---
The source at https://code.google.com/p/googleappengine/source/browse/trunk/python/google/appengine/tools/appcfg.py?r=396 (MigrateTraffic) doesn't seem to use the module at all. Is this a bug in appcfg.py or a missing feature of app engine?
I filed a ticket with Google support. The answer is that modules are not supported with migrate_traffic yet, and there is no ETA on when they might be. I also think the current version of appcfg.py no longer mentions modules as prominently in the help for migrate_traffic.

Error while starting a GAE/GWT project: Unable to restore the previous TimeZone

When I try to run a GWT App Engine project using the Eclipse plugin, I get the following error:
Initializing App Engine server
[ERROR] Unable to start App Engine server
java.lang.RuntimeException: Unable to restore the previous TimeZone
at com.google.appengine.tools.development.DevAppServerImpl.restoreLocalTimeZone(DevAppServerImpl.java:228)
at com.google.appengine.tools.development.DevAppServerImpl.start(DevAppServerImpl.java:164)
at com.google.appengine.tools.development.gwt.AppEngineLauncher.start(AppEngineLauncher.java:97)
at com.google.gwt.dev.DevMode.doStartUpServer(DevMode.java:509)
at com.google.gwt.dev.DevModeBase.startUp(DevModeBase.java:1068)
at com.google.gwt.dev.DevModeBase.run(DevModeBase.java:811)
at com.google.gwt.dev.DevMode.main(DevMode.java:311)
Caused by: java.lang.NoSuchFieldException: defaultZoneTL
at java.lang.Class.getDeclaredField(Class.java:1899)
at com.google.appengine.tools.development.DevAppServerImpl.restoreLocalTimeZone(DevAppServerImpl.java:222)
... 6 more
[ERROR] shell failed in doStartupServer method
Unable to start embedded HTTP server
com.google.gwt.core.ext.UnableToCompleteException: (see previous log entries)
at com.google.appengine.tools.development.gwt.AppEngineLauncher.start(AppEngineLauncher.java:102)
at com.google.gwt.dev.DevMode.doStartUpServer(DevMode.java:509)
at com.google.gwt.dev.DevModeBase.startUp(DevModeBase.java:1068)
at com.google.gwt.dev.DevModeBase.run(DevModeBase.java:811)
at com.google.gwt.dev.DevMode.main(DevMode.java:311)
Chris Cashwell provided the correct answer. But for people like myself who are relatively new to Eclipse, here are more explicit instructions (which I came across here):
Right-click the project directory in the Project Explorer window
Select Run As > Run Configurations...
Go to the Arguments tab
In the VM arguments text box, add one of the following parameters mentioned by Chris:
-Dappengine.user.timezone.impl=UTC (this worked in my case)
-Dappengine.user.timezone=UTC
Click Apply, then Run
In my case, this was done specifically in the context of a PlayN project I am working on, so I was right-clicking the HTML folder. In the end, my VM arguments looked something like this:
-Xmx512m -javaagent:/long/path/to/appengine-agent.jar -Dappengine.user.timezone.impl=UTC
See this bug report. For me, it was fixed by downgrading the JDK from 1.7.0_03 to 1.7.0_02. Other things that have been reported to work are adding -Dappengine.user.timezone=UTC (or in some cases -Dappengine.user.timezone.impl=UTC) to the JVM flags.
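The root cause is a reflective field lookup inside the dev server (DevAppServerImpl.restoreLocalTimeZone in the stack trace above). A minimal probe of that lookup, assuming it targets java.util.TimeZone, which is the class restoreLocalTimeZone manipulates:

import java.lang.reflect.Field;

public class TimeZoneFieldProbe {
    public static void main(String[] args) {
        try {
            // The dev server looks up a private field by name; on JDK builds where
            // the field was renamed or removed, getDeclaredField throws
            // NoSuchFieldException, which surfaces as
            // "Unable to restore the previous TimeZone".
            Field f = java.util.TimeZone.class.getDeclaredField("defaultZoneTL");
            f.setAccessible(true);
            System.out.println("Field found: " + f);
        } catch (NoSuchFieldException e) {
            System.out.println("Field not present on this JDK: " + e.getMessage());
        }
    }
}

Running it shows whether the field exists on a given JDK build, which is presumably why the downgrade helps.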
I got this error and found "port already in use" in the console.
I closed Eclipse and killed javaw.exe. Then everything worked fine.
