I submit jobs to the Hadoop cluster from the Flink client. Do I need to configure the Java home path in the Flink configuration?
If it does need to be configured, should I use the Java home of the client machine or the Java home of the Hadoop cluster?
When I submitted the job without configuring Java home, I got the following error:
LogType:jobmanager.err
Log Upload Time:Fri Jan 22 17:27:25 -0800 2021
LogLength:160
Log Contents:
Unrecognized VM option 'MaxMetaspaceSize=268435456'
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
According to the output you posted, your problem is not the Java path (yet!).
The problematic line is:
Unrecognized VM option 'MaxMetaspaceSize=268435456'
You need to either remove MaxMetaspaceSize from your configuration or replace it with -XX:MaxMetaspaceSize.
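As a minimal sketch, assuming the option is being set through env.java.opts in flink-conf.yaml (that is only an assumption; it may be coming from somewhere else in your setup), the corrected form would look like this:
env.java.opts: -XX:MaxMetaspaceSize=268435456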
You can get help from this issue.
I've just upgraded my Flink from version 1.9.1 to 1.11.2 (using Docker).
I already have many Flink jobs running on version 1.9.1.
When I try to upgrade to 1.11.1 and re-run my job, it shows this error:
2020-11-12 06:49:17,731 WARN org.apache.zookeeper.ClientCnxn []
- SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/tmp/jaas-1135609831848314731.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it.
2020-11-12 06:49:17,739 INFO org.apache.zookeeper.ClientCnxn [] - Opening socket connection to server xxxxxx:2181
2020-11-12 06:49:17,741 ERROR org.apache.curator.ConnectionState [] - Authentication failed
And this is the error after deploying my Flink job:
Caused by: java.lang.RuntimeException: API paths not defined
and also:
java.lang.NoSuchMethodError: org.apache.flink.api.common.state.OperatorStateStore.getSerializableListState(Ljava/lang/String;)Lorg/apache/flink/api/common/state/ListState;
Do I need to change every pom for my Flink jobs?
Is there any workaround that doesn't require changing my source code?
Thanks
Yes, you do have to rebuild your Flink jobs whenever you update the Flink version being used to run them. The libraries you use should be from the same exact version used by the Job Manager and Task Managers.
If you are trying to automate deployments for a CI/CD pipeline, you could inject the version number into the pom.xml using an environment variable -- but doing things like that can make it hard to debug when things go wrong.
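A minimal sketch of that idea, assuming Maven and the Scala 2.11 build of Flink 1.11 (the property name flink.version is just a convention, not anything Flink requires): declare the version once as a property, reference it from every Flink dependency, and override it from the build environment.
<properties>
    <!-- default version; can be overridden from the CI environment -->
    <flink.version>1.11.2</flink.version>
</properties>
<dependencies>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-streaming-java_2.11</artifactId>
        <version>${flink.version}</version>
    </dependency>
</dependencies>
A CI job could then build with something like mvn clean package -Dflink.version=1.11.2 so the job jar always matches the cluster version.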
I am attempting to restart my local Eclipse Che server after editing some configuration. I ran chectl server:stop, but got this error:
» Error: E_SHUTDOWN_CHE_SERVER_FAIL - Failed to shutdown Eclipse Che server. Login context is not set. Please login
» first.
So I attempted to log in with chectl auth:login, but was again presented with an error:
Using https://che-che.169.254.109.208.nip.io/api server API URL to log in
Error: Command failed with exit code 1: oc status
error: you do not have rights to view project "default" specified in your config or the project doesn't exist
I've looked through the documentation and couldn't find how to create a "default" project.
I used chectl server:deploy --platform=docker-desktop to start my server.
I have tried other methods of deploying Che, but it only worked when using Docker Desktop without Helm.
I am using Windows 10 Home, and deploying with Docker Desktop (Engine v19.03.13) and Kubernetes version v1.19.3.
Edit: I have filed a bug report on GitHub: https://github.com/eclipse/che/issues/18355
Currently trying to open the file gives this error:
C:\Users\....\....\apache-tomcat-8.0.45\logs>catalina.out
The process cannot access the file because it is being used by another process.
What I have done is put the application in webapps and start Tomcat using the following command:
catalina.bat jpda start
Now I want to see the logs on Windows. On Ubuntu, tail -f catalina.out can be used, but how can I view the Tomcat logs on Windows?
As stated in the answers here, you can try more catalina.out or type catalina.out.
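For example, running either of these from the logs directory shown in your prompt should display the file; type prints the whole file at once, while more pages through it one screen at a time:
type catalina.out
more catalina.out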
I'm using Worklight 6.2 server edition and I can't deploy a working runtime (of other environments) on my server.
I'm using WebSphere Liberty profile v8.5.5, and when I deploy the runtime via the GUI it reports success, and in server.xml I can see the new configuration for the app.
However, when I go to the worklightconsole I don't see my runtime, so I can't upload the app.
In messages.log there is an error regarding the JMX connection.
The quoted error is:
Failed to obtain JMX connection to access an MBean. There might be a JMX configuration error: No JMX connector is configured
I'm mentioning this because I've seen posts on SO saying that these issues might be connected. However, I do have the restConnector-1.0 feature in my WLP features.
Reference: No runtime on my Worklight 6.2 Console after installing analytics
In messages.log there are some other things I found interesting, like the correct start of the runtime I deployed:
[11/12/14 5:50:45:177 CST] 00000012 com.worklight.server.bundle.project.JeeProjectActivator I FWLST0002I: ========= Project /HelloWorld started. The project WAR file version is 6.2.0.00.20140922-2259,running on server version 6.2.0.00.20140613-0730. [project HelloWorld]
and two errors while starting my server:
[11/12/14 5:50:49:911 CST] 00000012 SystemErr R 24 WorklightPU WARN [Scheduled Executor-thread-1] openjpa.Runtime - An error occurred while registering a ClassTransformer with PersistenceUnitInfo: name 'WorklightPU', root URL [file:/opt/IBM/WebSphere/Liberty/usr/shared/resources/worklight/lib/worklight-jee-library.jar]. The error has been consumed. To see it, set your openjpa.Runtime log level to TRACE. Load-time class transformation will not be available.
Second error:
java.lang.RuntimeException: Timeout while waiting for the management service to start up
I don't know what these are, but I think they might be related to my problem; these errors appear whenever I start my server.
Does anyone have any tips for troubleshooting this issue?
Thanks in advance.
This is a known issue with WebSphere.
There is an APAR to fix it; as a workaround, restart the server with the --clean option to force a refresh of the shared libraries.
http://www-01.ibm.com/support/docview.wss?uid=swg1PI17830
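A minimal sketch of that workaround, assuming the Liberty install path from your log and a server named worklightServer (an assumption; substitute your actual server name):
cd /opt/IBM/WebSphere/Liberty/bin
./server stop worklightServer
./server start worklightServer --clean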
I work on a folder and Tomcat recognizes the folder. But when I shut down and restart Tomcat, it takes time to recognize the same folder. Can anybody tell me why?
I see the error report in catalina.out. It gives a list of errors, but finally says:
Nov 22, 2009 2:08:58 PM org.apache.catalina.startup.Catalina start
INFO: Server startup in 1403 ms
I presume 'folder' is your webapp. It takes time to start your webapp because Tomcat has to load and parse the configuration files for that app. Additionally, if your webapp contains JSP pages, those most likely get recompiled on the restart.
I guess you are referring to application deployment time. The most time-consuming step is scanning the JARs for TLDs. If you don't use JSP tags, you can speed up deployment by adding this to your context:
<Context processTlds="false" ... />
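If it helps, one common place for that setting (an assumption about your layout, not something from your question) is a context.xml file in the webapp's META-INF directory, for example:
<Context processTlds="false" />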