Camel-JCIFS large file (94 MB) fails - apache-camel

I am copying files using camel-jcifs. When the files are large, the copy started failing. Below are the options I am passing:
smb://server/Reports?bufferSize=4280&delay=60000&delete=true&include=.*.xls&localWorkDirectory=/tmp&moveFailed=.failed&readLock=changed&readLockCheckInterval=60000&readLockLoggingLevel=WARN&readLockMinLength=0&readLockTimeout=3600000
Error message:
Caused by: jcifs.smb.SmbException: Transport1 timedout waiting for response to SmbComWriteAndX[command=SMB_COM_WRITE_ANDX
Any help is much appreciated.
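One workaround sometimes suggested for this kind of timeout is to raise the jcifs client timeouts via JVM system properties, since camel-jcifs delegates the SMB transfer to the jcifs library. A sketch, assuming jcifs 1.x property names; the values and the jar name are illustrative, not tested settings:

# Illustrative values; tune for your file sizes and network.
java -Djcifs.smb.client.responseTimeout=300000 \
     -Djcifs.smb.client.soTimeout=360000 \
     -jar my-camel-app.jar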

Related

Error on iOS14 when loading OBJs into MDLAsset

When loading OBJs into an MDLAsset using the MDLAsset(url:) initializer (to eventually get the model into SceneKit), the operation fails frequently and inconsistently on iOS 14. This operation works fine for the same files on previous iOS versions. I've also observed the bug on iPadOS, although perhaps less frequently. Not sure if it's relevant, but these OBJs are pulled from a server and stored locally; the bug occurs after the files have already been downloaded. Sometimes the same file will fail multiple times before randomly working, and vice versa.
The console output seems to indicate a failure to communicate with the ModelIO XPC service. I tried restarting my device, but the bug continues to occur. Console output:
connection to com.apple.ModelIO.AssetLoader was interrupted
AssetLoader.loadURL errorHandler: Error Domain=NSCocoaErrorDomain Code=4097 "connection to service on pid 0 named com.apple.ModelIO.AssetLoader" UserInfo={NSDebugDescription=connection to service on pid 0 named com.apple.ModelIO.AssetLoader}
Couldn’t communicate with a helper application.
connection to com.apple.ModelIO.AssetLoader was interrupted
Has anyone else run into this issue on iOS 14?
Alternatively, are there any workarounds anyone has tried in the meantime? As far as I know, loading an OBJ (downloaded from a server) into SceneKit can only be done through ModelIO, short of writing an OBJ parser myself.
This seems to be fixed in iOS 14.3.
2020-10-13 18:31:36.989282+0300 Studia3D Viewer[1452:348335] connection to com.apple.ModelIO.AssetLoader was interrupted
2020-10-13 18:31:36.989368+0300 Studia3D Viewer[1452:347676] AssetLoader.loadURL errorHandler: Error Domain=NSCocoaErrorDomain Code=4097 "connection to service on pid 0 named com.apple.ModelIO.AssetLoader" UserInfo={NSDebugDescription=connection to service on pid 0 named com.apple.ModelIO.AssetLoader}
2020-10-13 18:31:36.989404+0300 Studia3D Viewer[1452:348332] connection to com.apple.ModelIO.AssetLoader was interrupted
2020-10-13 18:31:36.997352+0300 Studia3D Viewer[1452:347676] Couldn't communicate with a helper application.
The same thing happens with local files.
No solution yet.

Kylin Build Cube sometimes fails at "#19 Step Name: Hive Cleanup" with java.lang.RuntimeException: Failed to read kylin_hive_conf.xml

The error occurs intermittently, and after restarting Kylin (kylin.sh stop and then kylin.sh start), it finds the conf dir location and passes this step.
I am using Kylin version 2.6.2, and KYLIN_CONF="/opt/kylin/conf" is already set correctly.
The error hints differ; I have encountered the following:
1.
java.lang.RuntimeException: Failed to read kylin_hive_conf.xml at '/opt/apache-kylin-2.6.2-bin-hadoop3/bin/meta/kylin_hive_conf.xml'
    at org.apache.kylin.common.util.SourceConfigurationUtil.loadXmlConfiguration(SourceConfigurationUtil.java:88)
    at org.apache.kylin.common.util.SourceConfigurationUtil.loadHiveConfiguration(SourceConfigurationUtil.java:61)
    at org.apache.kylin.common.util.HiveCmdBuilder.<init>(HiveCmdBuilder.java:48)
    at org.apache.kylin.source.hive.GarbageCollectionStep.cleanUpIntermediateFlatTable(GarbageCollectionStep.java:63)
    at org.apache.kylin.source.hive.GarbageCollectionStep.doWork(GarbageCollectionStep.java:49)
    at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:167)
    at org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:71)
    at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:167)
    at org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:114)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
2.
java.lang.RuntimeException: Failed to read kylin_hive_conf.xml at '/opt/apache-kylin-2.6.2-bin-hadoop3/bin/meta/kylin_hive_conf.xml'
3.
java.lang.RuntimeException: Failed to read kylin_hive_conf.xml at '/opt/apache-kylin-2.6.2-bin-hadoop3/conf/meta/kylin_hive_conf.xml'
Can anyone kindly help me find the root cause and fix this problem?
Thanks in advance.
I hope you have already solved the issue. I encountered the same problem and investigated it.
Refer to https://github.com/apache/kylin/blob/kylin-2.6.2/engine-mr/src/main/java/org/apache/kylin/engine/mr/common/AbstractHadoopJob.java#L481
When we use MapReduce, KYLIN_CONF will be set to a different folder:
System.setProperty(KylinConfig.KYLIN_CONF, metaDir.getAbsolutePath());
I think to work around it, we have to create symbolic links for all the XML configuration files (see the sketch at the end of this answer).
Try checking your Kylin log:
cat YOUR_PATH/apache-kylin-2.6.3-bin-hbase1x/logs/kylin.log | grep "The absolute path"
You will possibly see a result like:
2019-10-14 23:47:04,438 INFO [LocalJobRunner Map Task Executor #0] common.AbstractHadoopJob:482 : The absolute path for meta dir is /SOME_FOLDER/meta
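A minimal sketch of that symlink workaround, assuming the paths reported in the stack traces above (both the conf dir and the meta dir vary by installation and by job, so take the "absolute path for meta dir" from the log line as the link target's parent):

# Hypothetical paths taken from the traces above; adjust to your installation.
ln -s /opt/apache-kylin-2.6.2-bin-hadoop3/conf/kylin_hive_conf.xml \
      /opt/apache-kylin-2.6.2-bin-hadoop3/bin/meta/kylin_hive_conf.xml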

Error uploading files to CKAN

I am trying to upload data to CKAN, and I can do that for smaller files (I have uploaded 4 kB successfully); however, for bigger files (a file of 18 MB already produced this error), I get "Error 500 - An internal server error occurred".
In the command prompt where I am running CKAN I get:
Error - <type 'exceptions.WindowsError'>: [Error 32] The file is already being used by another process: u'C:\\src\\ckan\\ckan\\resources\\a3d\\19a\\ba-7f3f-42fc-
a02e-09f50aae0924~'
URL: http://localhost:5000/dataset/new_resource/test1
I don't know what that file is, but I am pretty sure this error is the reason why I can't upload larger files, as it is the only error I get.
It is worth noting that I can successfully add resources from a URL and from small files, but when trying larger files, I get this error.
Does anyone have any idea on what could be wrong here?
Many thanks!
I can't explain that Windows error, but generally CKAN has an upload size limit of 10 MB for resources by default. You can raise that in your ini with ckan.max_resource_size = XX, for example ckan.max_resource_size = 100 (which means 100 MB).
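For reference, a sketch of where that setting lives, assuming the stock CKAN ini layout (the file name and section header are the defaults; the value is an example):

# In your CKAN configuration file (e.g. development.ini or production.ini):
[app:main]
# Maximum size of a resource upload, in MB (CKAN's default is 10)
ckan.max_resource_size = 100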

Undocumented Managed VM task queue RPCFailedError

I'm running into a very peculiar and undocumented issue with a GAE Managed VM and Task Queues. I understand that the Managed VM service is in beta, so this question may not be relevant forever, but it's definitely causing me a lot of headache now.
The main symptom of the issue is that, in certain (not completely known to me) circumstances, I'm seeing the following error/traceback:
File "/home/vmagent/my_app/some_file.py", line 265, in some_ndb_tasklet
res = yield some_task.add_async('some-task-queue-name')
File "/home/vmagent/python_vm_runtime/google/appengine/ext/ndb/tasklets.py", line 472, in _on_rpc_completion
result = rpc.get_result()
File "/home/vmagent/python_vm_runtime/google/appengine/api/apiproxy_stub_map.py", line 613, in get_result
return self.__get_result_hook(self)
File "/home/vmagent/python_vm_runtime/google/appengine/api/taskqueue/taskqueue.py", line 1948, in ResultHook
rpc.check_success()
File "/home/vmagent/python_vm_runtime/google/appengine/api/apiproxy_stub_map.py", line 579, in check_success
self.__rpc.CheckSuccess()
File "/home/vmagent/python_vm_runtime/google/appengine/ext/vmruntime/vmstub.py", line 312, in _WaitImpl
raise self._ErrorException(*_DEFAULT_EXCEPTION)
RPCFailedError: The remote RPC to the application server failed for call taskqueue.BulkAdd().
I've traced this through my local App Engine SDK, and I can get up to the last line of the trace, but google/appengine/ext/vmruntime/ doesn't exist on my machine at all, so I have no idea what's happening in vmstub.py. From looking at the local code, some_task.add_async('the-queue') is spinning up an RPC and waiting for it to finish, but this error is not what the except apiproxy_errors.ApplicationError, e: at line 1949 of taskqueue.py is expecting...
The code that's generating the error looks something like this:
@ndb.tasklet
def kickoff_tasks(batch_of_payloads):
    for task_payload in batch_of_payloads:
        # task_payload is a dict of task parameters
        task = taskqueue.Task(
            url='/the/handler/url',
            params=task_payload)
        res = yield task.add_async('some-valid-task-queue-name')
Other things worth noting:
This code itself is running in a task handler kicked off by another task.
I first saw this error before implementing this sort of batching, and assumed the issue was because I had added too many tasks from within a task handler.
In some cases, I can run this successfully with a batch size of 100, but in others, it fails consistently (depending on the data in the payloads) at 100, and sometimes succeeds at batch sizes of 50.
The task payloads themselves include batches of items, and are tuned to be just small enough to fit in a task. App Engine advertises a maximum task size of 100KB, so I'm keeping the payloads to under 90,000 bytes right now. Lowering the size even more doesn't seem to help any.
I've also tried implementing an exponential backoff to retry the kickoff_tasks method when this error appears, but it seems that once the error is raised, I can't add any other tasks at all from within the same handler (i.e. I can't kick off a "continue from where you left off" task; I just have to let this one fail and restart itself).
So, my question is, what is actually causing this error? How can I avoid it, or fix this so that I'm handling it correctly?
This is a known issue that is being worked on. There are actually two issues: the RPC failure itself, and the SDK's lack of handling of the RPCFailedError exception.
There is some public discussion of the issue here.
If you're using App Engine Flexible and the python-compat-multicore image, a new bug popped up related to App Engine using a newer version of the requests library, which broke the communication between App Engine Flexible and the datastore. You can fix this error by monkey patching the library in your appengine_config.py file.
Add the following code to appengine_config.py:
try:
    import appengine.ext.vmruntime.vmstub as vmstub
except ImportError:
    pass
else:
    if isinstance(vmstub.DEFAULT_TIMEOUT, (int, long)):
        # Newer requests libraries do not accept integers as header values.
        # Be sure to convert the header value before sending.
        # See Support Case ID 11235929.
        vmstub.DEFAULT_TIMEOUT = bytes(vmstub.DEFAULT_TIMEOUT)
Note that if you do not have an appengine_config.py file, you can just create it in your base project directory (wherever you put your app.yaml file). This file gets run during App Engine startup.
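A quick sanity check that the patch took effect, as a sketch (it assumes the same import path as the snippet above and only makes sense inside the VM runtime):

import appengine.ext.vmruntime.vmstub as vmstub
# After appengine_config.py has run, the timeout header value should be a byte string.
assert isinstance(vmstub.DEFAULT_TIMEOUT, bytes)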

Erlang file:consult and system_limit error

I use the file:consult function to read data from my application's config file, but sometimes I get the error {badmatch,{error,system_limit}}. How can I avoid this?
I read about ERL_MAX_PORTS, but this variable is not set:
echo $ERL_MAX_PORTS
returns an empty string. How can I correctly set ERL_MAX_PORTS, or is there another way to avoid this error?
Thank you.
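A sketch of raising the limit, assuming the system_limit comes from exhausting the Erlang VM's port table (file operations consume ports, so the OS file-descriptor limit can also be the culprit); the values are illustrative:

# OTP R16B and later: set the maximum number of ports with the +Q emulator flag.
erl +Q 65536
# Older Erlang/OTP releases read the ERL_MAX_PORTS environment variable instead:
export ERL_MAX_PORTS=65536
# Also check the OS file-descriptor limit, since each open file consumes a port:
ulimit -n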
