Kylin Build Cube sometimes fails at "#19 Step Name: Hive Cleanup" with java.lang.RuntimeException: Failed to read kylin_hive_conf.xml

The error occurs intermittently, and after restarting Kylin (kylin.sh stop and then kylin.sh start) it finds the conf dir location again and passes this step.
I am using Kylin version "2.6.2", and KYLIN_CONF="/opt/kylin/conf" is already set correctly.
The error hints differ from run to run; I have encountered the following:
1.
java.lang.RuntimeException: Failed to read kylin_hive_conf.xml at '/opt/apache-kylin-2.6.2-bin-hadoop3/bin/meta/kylin_hive_conf.xml'
at org.apache.kylin.common.util.SourceConfigurationUtil.loadXmlConfiguration(SourceConfigurationUtil.java:88)
at org.apache.kylin.common.util.SourceConfigurationUtil.loadHiveConfiguration(SourceConfigurationUtil.java:61)
at org.apache.kylin.common.util.HiveCmdBuilder.<init>(HiveCmdBuilder.java:48)
at org.apache.kylin.source.hive.GarbageCollectionStep.cleanUpIntermediateFlatTable(GarbageCollectionStep.java:63)
at org.apache.kylin.source.hive.GarbageCollectionStep.doWork(GarbageCollectionStep.java:49)
at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:167)
at org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:71)
at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:167)
at org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:114)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2.
java.lang.RuntimeException: Failed to read kylin_hive_conf.xml at '/opt/apache-kylin-2.6.2-bin-hadoop3/bin/meta/kylin_hive_conf.xml'
3.
java.lang.RuntimeException: Failed to read kylin_hive_conf.xml at '/opt/apache-kylin-2.6.2-bin-hadoop3/conf/meta/kylin_hive_conf.xml'
Can anyone kindly help me find the root cause and fix this problem?
Thanks in advance.

I hope you have already solved the issue. I encountered the same problem and investigated it.
Refer to https://github.com/apache/kylin/blob/kylin-2.6.2/engine-mr/src/main/java/org/apache/kylin/engine/mr/common/AbstractHadoopJob.java#L481
When the job runs through MapReduce, KYLIN_CONF is reset to a different folder:
System.setProperty(KylinConfig.KYLIN_CONF, metaDir.getAbsolutePath());
I think the workaround is to create symbolic links for all the XML configuration files in that folder.
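A minimal sketch of that workaround, assuming the real configs live in /opt/kylin/conf and the job meta dir resolves to the bin/meta path from the first stack trace above (adjust both paths to your install; kylin_job_conf.xml is included only as an example of another XML config that may be looked up the same way):
ln -s /opt/kylin/conf/kylin_hive_conf.xml /opt/apache-kylin-2.6.2-bin-hadoop3/bin/meta/kylin_hive_conf.xml
ln -s /opt/kylin/conf/kylin_job_conf.xml /opt/apache-kylin-2.6.2-bin-hadoop3/bin/meta/kylin_job_conf.xml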
Check your Kylin log:
cat YOUR_PATH/apache-kylin-2.6.3-bin-hbase1x/logs/kylin.log | grep "The absolute path"
You will likely see a result like:
2019-10-14 23:47:04,438 INFO [LocalJobRunner Map Task Executor #0] common.AbstractHadoopJob:482 : The absolute path for meta dir is /SOME_FOLDER/meta

Related

Renaming package causes crash; java.lang.UnsatisfiedLinkError: No implementation found for void

I tried to rename my package and the app crashes at startup. This is the error message; can someone please help and tell me how I can fix it?
FATAL EXCEPTION: SDLThread
Process: com.MYAPP.demo, PID: 3967
java.lang.UnsatisfiedLinkError: No implementation found for void com.MYAPP.demo.MainActivity.initEnv() (tried Java_com_MYAPP_demo_MainActivity_initEnv and Java_com_MYAPP_demo_MainActivity_initEnv__)
at com.MYAPP.demo.MainActivity.initEnv(Native Method)
at com.MYAPP.demo.MainActivity._initEnv(MainActivity.java:71)
at org.libsdl.app.SDLMain.run(SDLActivity.java:1679)
at java.lang.Thread.run(Thread.java:919)
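The error message itself points at the cause: JNI resolves a native method through a C symbol derived from the fully qualified Java class name, so after a package rename the runtime looks for a new symbol while the compiled .so still exports the old one. A minimal sketch of the Java side (class and method names are taken from the stack trace; that MainActivity extends SDLActivity is an assumption based on the SDL thread in the trace):

package com.MYAPP.demo;

import org.libsdl.app.SDLActivity;

public class MainActivity extends SDLActivity {
    // JNI resolves this declaration against a C symbol built from the
    // fully qualified class name: Java_com_MYAPP_demo_MainActivity_initEnv
    // (exactly what the error above says it "tried"). After a package
    // rename, the matching function in the native sources must be renamed
    // too, or the lookup fails with UnsatisfiedLinkError.
    public native void initEnv();
}

The fix is therefore either to rename the corresponding Java_..._initEnv functions in the native sources and rebuild the library, or to keep the class that declares the natives in its original package.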

Script aborted because of a possible infinite loop

I have followed these instructions to set up the latest version of Neos:
https://docs.neos.io/cms/installation-development-setup/manual-installation-with-a-web-server
https://docs.neos.io/cms/installation-development-setup/running-the-setup-tool
After I log into the CMS I get the following error:
Xdebug has detected a possible infinite loop, and aborted your script with a stack depth of '256' frames
Exception Code 0
Exception Type Error
Thrown in File Packages/Framework/Neos.Flow/Classes/ObjectManagement/ObjectManager.php
Line 539
My local environment:
PHP Version 7.3.26-1+ubuntu20.04.1+deb.sury.org+1
XDebug 3.0.2
What's going wrong here?
This solved my problem.
In 20-xdebug.ini:
[xdebug]
xdebug.max_nesting_level=512
In php.ini:
max_execution_time = 600
This solved my problem:
"Xdebug has detected a possible infinite loop, and aborted your script with a stack depth of '256' frames"
Add the following to the file:
/etc/php/8.1/apache2/conf.d/20-xdebug.ini
xdebug.max_nesting_level=512
Then restart the web server.
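If you are unsure which ini file Xdebug is actually loaded from, a quick check from the command line (assuming the CLI reads the same configuration layout as the web server, which is not guaranteed on every setup):
php --ini | grep -i xdebug
php -i | grep -i max_nesting_level
sudo systemctl restart apache2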

Cause: Command execution failed on the local server with non-zero exit code

Failed to fetch information from target servers
Cause: Command execution failed on the local server with non-zero exit code.
command: /usr/local/psa/bin/ipmanage --xml-info
exit code: 255
stdout: <ipinfo>
<ip name="193.160.214.57">
<state>0</state>
<type>shared</type>
<ip_address>193.160.214.57</ip_address>
<mask>255.255.255.255</mask>
<iface>venet0</iface>
<clients>0</clients>
<hostings>0</hostings>
<ftps>false</ftps>
<publicIp></publicIp>
</ip>
</ipinfo>
stderr: [2019-10-20 21:21:51.133] ERR [util_exec] proc_close() failed ['/usr/local/psa/admin/bin/f2bmng' '--reload'] with exit code [1]
PHP Fatal error: Uncaught PleskUtilException: f2bmng failed: 2019-10-20 21:21:51,115 fail2ban.jailreader [17670]: ERROR No file(s) found for glob /var/log/secure
2019-10-20 21:21:51,115 fail2ban [17670]: ERROR Failed during configuration: Have not found any log file for ssh jail
ERROR:__main__:Command '['/usr/bin/fail2ban-client', 'reload']' returned non-zero exit status 255 in /usr/local/psa/admin/plib/Service/Agent.php:210
Stack trace:
#0 /usr/local/psa/admin/plib/Ip/Ban/Manager.php(490): Service_Agent->execAndGetResponse('f2bmng', Array, '')
#1 /usr/local/psa/admin/plib/Ip/Ban/Manager.php(458): Ip_Ban_Manager->_callUtility('--reload')
#2 /usr/local/psa/admin/plib/Fail2Ban/EventListener.php(123): Ip_Ban_Manager->reload()
#3 [internal function]: Plesk\Fail2Ban\EventListener->applyChanges()
#4 {main}
thrown in /usr/local/psa/admin/plib/Service/Agent.php on line 210
That is a critical error, migration was stopped.
I don't know what is "wrong" with your Plesk (I'm not too familiar with it), but the fail2ban error is pretty simple:
ERROR No file(s) found for glob /var/log/secure
2019-10-20 21:21:51,115 fail2ban [17670]: ERROR Failed during configuration: Have not found any log file for ssh jail
Your ssh jail seems to be configured to monitor /var/log/secure, which does not exist. Either specify the proper logpath (/var/log/auth.log?) where sshd logs authentication errors,
or, if your system logs to the systemd journal, specify backend = systemd for that jail.
The corresponding fail2ban jail.local entry would be:
[ssh]
# backend = systemd
logpath = /var/log/auth.log
But you can surely configure this in the Plesk settings too.
Also note that your jail is called ssh, whereas the default fail2ban jail is normally sshd (though it could indeed have been configured under this name by your distribution's maintainer).
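Once the logpath (or backend) is fixed, you can verify it from the shell before letting Plesk retry (the jail name ssh is taken from the log above; use sshd if that is what your configuration actually defines):
fail2ban-client -d          # dump the parsed configuration without applying it
fail2ban-client reload      # what Plesk's f2bmng was trying to run
fail2ban-client status ssh  # inspect the jail once it loads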

Camel-JCIFS large file (94 MB) fails

I am copying files using camel-jcifs. When the files are large, it starts failing. Below are the options I am passing:
smb://server/Reports?bufferSize=4280&delay=60000&delete=true&include=.*.xls&localWorkDirectory=/tmp&moveFailed=.failed&readLock=changed&readLockCheckInterval=60000&readLockLoggingLevel=WARN&readLockMinLength=0&readLockTimeout=3600000
Error message:
Caused by: jcifs.smb.SmbException: Transport1 timedout waiting for response to SmbComWriteAndX[command=SMB_COM_WRITE_ANDX
Any help is much appreciated.
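For reference, an endpoint like that is typically consumed in a route along these lines (a minimal Java DSL sketch; the smb URI is the one from the question, shortened here, and the file target directory is hypothetical):

import org.apache.camel.builder.RouteBuilder;

public class ReportsRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Consume .xls files from the SMB share with the poster's options
        // (URI shortened); write them to a local directory.
        from("smb://server/Reports?delete=true&include=.*.xls"
                + "&localWorkDirectory=/tmp&readLock=changed&readLockTimeout=3600000")
            .to("file:/data/reports");
    }
}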

ClearCase error while merging code

I am not able to merge the code into the regression branch for a specific package.
The error is:
merge: Error: *** No Automatic Decision Possible
merge: Error: *** Aborting...
merge: Error: Unable to remove "C:\TEMP\tmp22349": Permission denied.
Directory merges were necessary and -depth was specified
Unable to evaluate all possible merge candidates
How can I avoid this error?
Please find the attached screenshot of the error.
Regarding "Unable to remove "C:\TEMP\tmp22349": Permission denied.", launch a process explorer and search (Ctrl+F) the process which keeps an handle on that resource: kill that or those processes.
Then delete the file. And try the merge -abort one more time.
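A sketch of that retry, assuming the merge is driven with cleartool findmerge (the regression branch name comes from the question, the path and log file are placeholders, -depth matches what the error output mentions, and -abort skips any merge that would need manual interaction):
cleartool findmerge . -fversion .../regression/LATEST -depth -merge -abort -log C:\TEMP\findmerge.log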
