Error saving a string longer than 1500 bytes in Datastore from the Dataflow API - google-app-engine

My Dataflow job throws this error message when I try to save a very long string: The value of property "myProperty" is longer than 1500 bytes., code=INVALID_ARGUMENT.
The same error occurs when following Google's DatastoreWordCount sample and saving a string longer than 1500 bytes.
I know that when using the Datastore API I am able to save strings longer than 1500 bytes by saving the property as com.google.appengine.api.datastore.Text. However, there is nothing in the DatastoreWordCount sample or in the DatastoreHelper class documentation to indicate that the Text type is supported.
Is there a way to save such long strings using that API so that they can be read back as com.google.appengine.api.datastore.Text?
The full error message is as follows:
java.lang.RuntimeException: com.google.cloud.dataflow.sdk.util.UserCodeException: java.lang.RuntimeException: com.google.cloud.dataflow.sdk.util.UserCodeException: java.lang.RuntimeException: com.google.cloud.dataflow.sdk.util.UserCodeException: java.lang.RuntimeException: com.google.cloud.dataflow.sdk.util.UserCodeException: com.google.datastore.v1.client.DatastoreException: The value of property "dalekTestExecutions" is longer than 1500 bytes., code=INVALID_ARGUMENT
at com.google.cloud.dataflow.sdk.runners.worker.SimpleParDoFn$1.output(SimpleParDoFn.java:162)
at com.google.cloud.dataflow.sdk.util.DoFnRunnerBase$DoFnContext.outputWindowedValue(DoFnRunnerBase.java:288)
at com.google.cloud.dataflow.sdk.util.DoFnRunnerBase$DoFnContext.outputWindowedValue(DoFnRunnerBase.java:284)
at com.google.cloud.dataflow.sdk.util.DoFnRunnerBase$DoFnProcessContext$1.outputWindowedValue(DoFnRunnerBase.java:508)
at com.google.cloud.dataflow.sdk.util.GroupAlsoByWindowsAndCombineDoFn.closeWindow(GroupAlsoByWindowsAndCombineDoFn.java:205)
at com.google.cloud.dataflow.sdk.util.GroupAlsoByWindowsAndCombineDoFn.processElement(GroupAlsoByWindowsAndCombineDoFn.java:192)
at com.google.cloud.dataflow.sdk.util.SimpleDoFnRunner.invokeProcessElement(SimpleDoFnRunner.java:49)
at com.google.cloud.dataflow.sdk.util.DoFnRunnerBase.processElement(DoFnRunnerBase.java:139)
at com.google.cloud.dataflow.sdk.runners.worker.SimpleParDoFn.processElement(SimpleParDoFn.java:190)
at com.google.cloud.dataflow.sdk.runners.worker.ForwardingParDoFn.processElement(ForwardingParDoFn.java:42)
at com.google.cloud.dataflow.sdk.runners.worker.DataflowWorkerLoggingParDoFn.processElement(DataflowWorkerLoggingParDoFn.java:47)
at com.google.cloud.dataflow.sdk.util.common.worker.ParDoOperation.process(ParDoOperation.java:55)
at com.google.cloud.dataflow.sdk.util.common.worker.OutputReceiver.process(OutputReceiver.java:52)
at com.google.cloud.dataflow.sdk.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:224)
at com.google.cloud.dataflow.sdk.util.common.worker.ReadOperation.start(ReadOperation.java:185)
at com.google.cloud.dataflow.sdk.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:72)
at com.google.cloud.dataflow.sdk.runners.worker.DataflowWorker.executeWork(DataflowWorker.java:287)
at com.google.cloud.dataflow.sdk.runners.worker.DataflowWorker.doWork(DataflowWorker.java:223)
at com.google.cloud.dataflow.sdk.runners.worker.DataflowWorker.getAndPerformWork(DataflowWorker.java:173)
at com.google.cloud.dataflow.sdk.runners.worker.DataflowWorkerHarness$WorkerThread.doWork(DataflowWorkerHarness.java:193)
at com.google.cloud.dataflow.sdk.runners.worker.DataflowWorkerHarness$WorkerThread.call(DataflowWorkerHarness.java:173)
at com.google.cloud.dataflow.sdk.runners.worker.DataflowWorkerHarness$WorkerThread.call(DataflowWorkerHarness.java:160)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

You can save a string longer than 1500 bytes by excluding the value from indexing:
Value longString = Value.newBuilder()
    .setStringValue(...)
    .setExcludeFromIndexes(true)
    .build();
If you need compatibility with App Engine's com.google.appengine.api.datastore.Text type, you would also want to set the meaning to 15:
Value longString = Value.newBuilder()
    .setStringValue(...)
    .setExcludeFromIndexes(true)
    .setMeaning(15)
    .build();
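For context, here is a minimal sketch of attaching such a value to a com.google.datastore.v1.Entity, as you would build it for DatastoreIO; the kind, id, and the veryLongString variable are made up for illustration:
// Hypothetical names; meaning 15 lets App Engine read the value back as Text.
Entity entity = Entity.newBuilder()
    .setKey(DatastoreHelper.makeKey("MyKind", "myId"))
    .putProperties("myProperty", Value.newBuilder()
        .setStringValue(veryLongString)
        .setExcludeFromIndexes(true)
        .setMeaning(15)
        .build())
    .build();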

Datastore indexes every property by default, and indexed string properties are limited to 1500 bytes. If you need to store something like a big JSON document, you can specify that the property should not be indexed, in the following way:
Entity newEntity =
    Entity.newBuilder(key)
        .set("time", Timestamp.parseTimestamp("1970-01-01T00:00:00Z"))
        .set("message", StringValue.newBuilder(JSON).setExcludeFromIndexes(true).build())
        .build();
This way you can save values larger than the default 1500-byte limit.

To be exact:
StringValue.newBuilder(yourString).setExcludeFromIndexes(true).build()
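For completeness, a hedged end-to-end sketch with the google-cloud-datastore client used in the example above; the kind, key name, property name, and the veryLongJson variable are illustrative:
Datastore datastore = DatastoreOptions.getDefaultInstance().getService();
Key key = datastore.newKeyFactory().setKind("Message").newKey("msg-1");
Entity entity = Entity.newBuilder(key)
    .set("body", StringValue.newBuilder(veryLongJson).setExcludeFromIndexes(true).build())
    .build();
// succeeds even though "body" is longer than 1500 bytes, because it is unindexed
datastore.put(entity);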

Related

Apache Flink with Kinesis Analytics : java.lang.IllegalArgumentException: The fraction of memory to allocate should not be 0

Background:
I have been trying to set up BATCH + STREAMING in the same Flink application, deployed on the Kinesis Analytics runtime. The STREAMING part works fine, but I'm having trouble adding support for BATCH.
The logic is something like this:
streamExecutionEnvironment.setRuntimeMode(RuntimeExecutionMode.BATCH);
streamExecutionEnvironment
    .fromSource(FileSource.forRecordStreamFormat(new TextLineFormat(), path).build(),
        WatermarkStrategy.noWatermarks(),
        "Text File")
    .process(/* process function which transforms input */)
    .assignTimestampsAndWatermarks(WatermarkStrategy
        .<DetectionEvent>forBoundedOutOfOrderness(orderness)
        .withTimestampAssigner(
            (SerializableTimestampAssigner<DetectionEvent>) (event, l) -> event.getEventTime()))
    .keyBy(keyFunction)
    .window(TumblingEventTimeWindows.of(Time.days(x)))
    .process(processWindowFunction);
On doing this I'm getting the below exception:
java.lang.Exception: Exception while creating StreamOperatorStateContext.
at org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.streamOperatorStateContext(StreamTaskStateInitializerImpl.java:254)
at org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:272)
at org.apache.flink.streaming.runtime.tasks.OperatorChain.initializeStateAndOpenOperators(OperatorChain.java:441)
at org.apache.flink.streaming.runtime.tasks.StreamTask.restoreGates(StreamTask.java:582)
at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.call(StreamTaskActionExecutor.java:55)
at org.apache.flink.streaming.runtime.tasks.StreamTask.executeRestore(StreamTask.java:562)
at org.apache.flink.streaming.runtime.tasks.StreamTask.runWithCleanUpOnFail(StreamTask.java:647)
at org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:537)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:764)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:571)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: org.apache.flink.util.FlinkException: Could not restore keyed state backend for WindowOperator_90bea66de1c231edf33913ecd54406c1_(1/1) from any of the 1 provided restore options.
at org.apache.flink.streaming.api.operators.BackendRestorerProcedure.createAndRestore(BackendRestorerProcedure.java:160)
at org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.keyedStatedBackend(StreamTaskStateInitializerImpl.java:345)
at org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.streamOperatorStateContext(StreamTaskStateInitializerImpl.java:163)
... 10 more
Caused by: java.io.IOException: Failed to acquire shared cache resource for RocksDB
at org.apache.flink.contrib.streaming.state.RocksDBOperationUtils.allocateSharedCachesIfConfigured(RocksDBOperationUtils.java:306)
at org.apache.flink.contrib.streaming.state.EmbeddedRocksDBStateBackend.createKeyedStateBackend(EmbeddedRocksDBStateBackend.java:426)
at org.apache.flink.contrib.streaming.state.EmbeddedRocksDBStateBackend.createKeyedStateBackend(EmbeddedRocksDBStateBackend.java:90)
at org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.lambda$keyedStatedBackend$1(StreamTaskStateInitializerImpl.java:328)
at org.apache.flink.streaming.api.operators.BackendRestorerProcedure.attemptCreateAndRestore(BackendRestorerProcedure.java:168)
at org.apache.flink.streaming.api.operators.BackendRestorerProcedure.createAndRestore(BackendRestorerProcedure.java:135)
... 12 more
Caused by: java.lang.IllegalArgumentException: The fraction of memory to allocate should not be 0. Please make sure that all types of managed memory consumers contained in the job are configured with a non-negative weight via `taskmanager.memory.managed.consumer-weights`.
at org.apache.flink.util.Preconditions.checkArgument(Preconditions.java:160)
at org.apache.flink.runtime.memory.MemoryManager.validateFraction(MemoryManager.java:672)
at org.apache.flink.runtime.memory.MemoryManager.computeMemorySize(MemoryManager.java:653)
at org.apache.flink.runtime.memory.MemoryManager.getSharedMemoryResourceForManagedMemory(MemoryManager.java:521)
at org.apache.flink.contrib.streaming.state.RocksDBOperationUtils.allocateSharedCachesIfConfigured(RocksDBOperationUtils.java:302)
... 17 more
It seems like Kinesis Analytics does not allow clients to provide a flink-conf.yaml file in which to define taskmanager.memory.managed.consumer-weights. Is there any way around this?
It's not clear to me what the underlying cause of this exception is, nor how to make batch processing work on KDA.
You can try this (but I'm not sure KDA will allow it):
Configuration conf = new Configuration();
conf.setString("taskmanager.memory.managed.consumer-weights", "put-the-value-here");
StreamExecutionEnvironment env =
    StreamExecutionEnvironment.getExecutionEnvironment(conf);
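Note that StreamExecutionEnvironment.getExecutionEnvironment(Configuration) only exists from Flink 1.12 onward, and whether the setting actually takes effect depends on whether KDA lets job code override cluster configuration; treat this as a sketch to try, not a guaranteed fix.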

Flink job create RocksDB instance failure

I run many jobs on Flink with the RocksDB state backend.
One of my jobs hit an error and kept restarting all night.
The error message looks like:
java.lang.IllegalStateException: Could not initialize keyed state backend.
at org.apache.flink.streaming.api.operators.AbstractStreamOperator.initKeyedState(AbstractStreamOperator.java:330)
at org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:221)
at org.apache.flink.streaming.runtime.tasks.StreamTask.initializeOperators(StreamTask.java:679)
at org.apache.flink.streaming.runtime.tasks.StreamTask.initializeState(StreamTask.java:666)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:253)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:708)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Error while opening RocksDB instance.
at org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend.openDB(RocksDBKeyedStateBackend.java:1063)
at org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend.access$3300(RocksDBKeyedStateBackend.java:128)
at org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend$RocksDBIncrementalRestoreOperation.restoreInstance(RocksDBKeyedStateBackend.java:1472)
at org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend$RocksDBIncrementalRestoreOperation.restore(RocksDBKeyedStateBackend.java:1569)
at org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend.restore(RocksDBKeyedStateBackend.java:996)
at org.apache.flink.streaming.runtime.tasks.StreamTask.createKeyedStateBackend(StreamTask.java:775)
at org.apache.flink.streaming.api.operators.AbstractStreamOperator.initKeyedState(AbstractStreamOperator.java:319)
Caused by: org.rocksdb.RocksDBException: Corruption: Sst file size mismatch: /mnt/dfs/3/hadoop/yarn/local/usercache/sloth/appcache/application_1526888270443_0002/flink-io-84ec9962-f37f-4fbc-8262-a215984d8d70/job-1a72a5f09ac8a80914256306363505aa_op-CoStreamFlatMap_1361_4_uuid-0b019d7f-2d28-44dc-baf2-12774ed3518f/db/008919.sst. Size recorded in manifest 132174005, actual size 2674688
Sst file size mismatch: /mnt/dfs/3/hadoop/yarn/local/usercache/sloth/appcache/application_1526888270443_0002/flink-io-84ec9962-f37f-4fbc-8262-a215984d8d70/job-1a72a5f09ac8a80914256306363505aa_op-CoStreamFlatMap_1361_4_uuid-0b019d7f-2d28-44dc-baf2-12774ed3518f/db/008626.sst. Size recorded in manifest 111956797, actual size 14286848
Sst file size mismatch: /mnt/dfs/3/hadoop/yarn/local/usercache/sloth/appcache/application_1526888270443_0002/flink-io-84ec9962-f37f-4fbc-8262-a215984d8d70/job-1a72a5f09ac8a80914256306363505aa_op-CoStreamFlatMap_1361_4_uuid-0b019d7f-2d28-44dc-baf2-12774ed3518f/db/008959.sst. Size recorded in manifest 43157714, actual size 933888
at org.rocksdb.TtlDB.openCF(Native Method)
at org.rocksdb.TtlDB.open(TtlDB.java:132)
at org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend.openDB(RocksDBKeyedStateBackend.java:1054)
... 12 more
When I found this, I killed the job manually and started it again. Then it worked well.
How does this error happen? I can't find anything about it on Google or anywhere else.
I found the error that occurred before this exception:
No space left on device
I think that problem caused this issue: with the disk full, RocksDB could not write the SST files completely, which would explain why the sizes recorded in the manifest are larger than the actual files.

no content to map due to end of input - In Camel

I am new to using Camel. I am getting the expected response from the URL I hit, which I have logged. But after receiving the message, I get the following error while unmarshalling it:
On delivery attempt: 0 caught: com.fasterxml.jackson.databind.JsonMappingException: No content to map due to end-of-input
Maybe it's due to the streaming, read-only-once problem: since you logged the body, it's now empty. See this FAQ: http://camel.apache.org/why-is-my-message-body-empty.html
How I solved this issue:
I autowired the DefaultCamelContext bean in my RouteBuilder class and set stream caching to true. This enables stream caching globally.
@Autowired
DefaultCamelContext camelContext;
Then set stream caching to true:
camelContext.setStreamCaching(true);
Alternatively, you can enable stream caching for a single route as follows:
from("jbi:service:http://myService.org")
.streamCaching(true)
.to("jbi:service:http://myOtherService.org");

Deleting records from Salesforce via Mulesoft ESB

I am trying to delete a list of records from Salesforce via Mulesoft ESB.
I have already created an example project on GitHub, and the flow XML can be found here.
The XML snippet for deleting records is below:
<sfdc:delete config-ref="Salesforce__Basic_authentication" doc:name="Salesforce">
    <sfdc:ids ref="#[payload]" />
</sfdc:delete>
where the payload's data type is a List of Strings.
While deleting the records I am getting the below exception:
ERROR 2015-11-05 23:39:39,755 [[tutorial].HTTP_Listener_Configuration.worker.01] org.mule.exception.DefaultMessagingExceptionStrategy:
********************************************************************************
Message : Could not serialize object (org.mule.api.serialization.SerializationException)
Type : org.mule.api.transformer.TransformerException
Code : MULE_ERROR--2
JavaDoc : http://www.mulesoft.org/docs/site/current3/apidocs/org/mule/api/transformer/TransformerException.html
Transformer : ObjectToByteArray{this=7be7e15, name='_ObjectToByteArray', ignoreBadInput=false, returnClass=SimpleDataType{type=[B, mimeType='*/*', encoding='null'}, sourceTypes=[SimpleDataType{type=java.io.Serializable, mimeType='*/*', encoding='null'}, SimpleDataType{type=java.io.InputStream, mimeType='*/*', encoding='null'}, SimpleDataType{type=java.lang.String, mimeType='*/*', encoding='null'}, SimpleDataType{type=org.mule.api.transport.OutputHandler, mimeType='*/*', encoding='null'}]}
********************************************************************************
Exception stack is:
1. com.sforce.soap.partner.DeleteResult (java.io.NotSerializableException)
java.io.ObjectOutputStream:1184 (null)
2. java.io.NotSerializableException: com.sforce.soap.partner.DeleteResult (org.apache.commons.lang.SerializationException)
org.apache.commons.lang.SerializationUtils:111 (null)
3. Could not serialize object (org.mule.api.serialization.SerializationException)
org.mule.serialization.internal.AbstractObjectSerializer:68 (http://www.mulesoft.org/docs/site/current3/apidocs/org/mule/api/serialization/SerializationException.html)
4. Could not serialize object (org.mule.api.serialization.SerializationException) (org.mule.api.transformer.TransformerException)
org.mule.transformer.simple.SerializableToByteArray:66 (http://www.mulesoft.org/docs/site/current3/apidocs/org/mule/api/transformer/TransformerException.html)
********************************************************************************
Root Exception stack trace:
java.io.NotSerializableException: com.sforce.soap.partner.DeleteResult
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1184)
at java.io.ObjectOutputStream.writeArray(ObjectOutputStream.java:1378)
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1174)
at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)
at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)
at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:348)
at org.apache.commons.lang.SerializationUtils.serialize(SerializationUtils.java:108)
at org.apache.commons.lang.SerializationUtils.serialize(SerializationUtils.java:133)
at org.mule.serialization.internal.JavaObjectSerializer.doSerialize(JavaObjectSerializer.java:44)
at org.mule.serialization.internal.AbstractObjectSerializer.serialize(AbstractObjectSerializer.java:64)
at org.mule.transformer.simple.SerializableToByteArray.doTransform(SerializableToByteArray.java:62)
at org.mule.transformer.simple.ObjectToByteArray.doTransform(ObjectToByteArray.java:78)
at org.mule.transformer.AbstractTransformer.transform(AbstractTransformer.java:415)
at org.mule.DefaultMuleMessage.getPayload(DefaultMuleMessage.java:425)
at org.mule.DefaultMuleMessage.getPayload(DefaultMuleMessage.java:373)
at org.mule.DefaultMuleMessage.getPayloadAsBytes(DefaultMuleMessage.java:714)
at org.mule.module.http.internal.listener.HttpResponseBuilder.build(HttpResponseBuilder.java:175)
at org.mule.module.http.internal.listener.HttpMessageProcessorTemplate.sendResponseToClient(HttpMessageProcessorTemplate.java:97)
at org.mule.execution.AsyncResponseFlowProcessingPhase.runPhase(AsyncResponseFlowProcessingPhase.java:83)
at org.mule.execution.AsyncResponseFlowProcessingPhase.runPhase(AsyncResponseFlowProcessingPhase.java:38)
at org.mule.execution.PhaseExecutionEngine$InternalPhaseExecutionEngine.phaseSuccessfully(PhaseExecutionEngine.java:65)
at org.mule.execution.PhaseExecutionEngine$InternalPhaseExecutionEngine.phaseSuccessfully(PhaseExecutionEngine.java:69)
at com.mulesoft.mule.throttling.ThrottlingPhase.runPhase(ThrottlingPhase.java:185)
at com.mulesoft.mule.throttling.ThrottlingPhase.runPhase(ThrottlingPhase.java:1)
at org.mule.execution.PhaseExecutionEngine$InternalPhaseExecutionEngine.process(PhaseExecutionEngine.java:114)
at org.mule.execution.PhaseExecutionEngine.process(PhaseExecutionEngine.java:41)
at org.mule.execution.MuleMessageProcessingManager.processMessage(MuleMessageProcessingManager.java:32)
at org.mule.module.http.internal.listener.DefaultHttpListener$1.handleRequest(DefaultHttpListener.java:126)
at org.mule.module.http.internal.listener.grizzly.GrizzlyRequestDispatcherFilter.handleRead(GrizzlyRequestDispatcherFilter.java:83)
at org.glassfish.grizzly.filterchain.ExecutorResolver$9.execute(ExecutorResolver.java:119)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeFilter(DefaultFilterChain.java:283)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeChainPart(DefaultFilterChain.java:200)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.execute(DefaultFilterChain.java:132)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.process(DefaultFilterChain.java:111)
at org.glassfish.grizzly.ProcessorExecutor.execute(ProcessorExecutor.java:77)
at org.glassfish.grizzly.nio.transport.TCPNIOTransport.fireIOEvent(TCPNIOTransport.java:536)
at org.glassfish.grizzly.strategies.AbstractIOStrategy.fireIOEvent(AbstractIOStrategy.java:112)
at org.mule.module.http.internal.listener.grizzly.ExecutorPerServerAddressIOStrategy.run0(ExecutorPerServerAddressIOStrategy.java:102)
at org.mule.module.http.internal.listener.grizzly.ExecutorPerServerAddressIOStrategy.access$100(ExecutorPerServerAddressIOStrategy.java:30)
at org.mule.module.http.internal.listener.grizzly.ExecutorPerServerAddressIOStrategy$WorkerThreadRunnable.run(ExecutorPerServerAddressIOStrategy.java:125)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
You are actually deleting successfully. The problem is that your Mule flow ends after making the Salesforce call, and by default it tries to return the current payload. That payload is the result of the SF call, which is of type com.sforce.soap.partner.DeleteResult, and Mule doesn't know how to serialize it.
To start, try simply adding a "Set payload" component after the SF call and set payload to anything you want, say "hello world". Once you understand what's happening, you can add a groovy script to process the DeleteResult and actually determine if the deletion was successful or not, and possibly do something if it wasn't.
Alternatively, if you don't want to explicitly change the payload and want to see the exact payload coming out of the SFDC connector, try placing an Object to JSON transformer after the connector, followed by a logger to log the payload.

How to send CSV into a file in streaming mode, where I get input in "java.io.PipedInputStream#1bca8e6" format, in Mule ESB?

My requirement is: I get input XML and, based on some condition checking (using choice), I need to send it to 2 different files. As I'm getting big files (100-400 MB), I'm sending them in streaming mode (by enabling streaming in the file and DataMapper components).
It works fine for small input XML (10-20 MB). But when I give it a large input XML, the condition checking and XML-to-CSV conversion work fine, but while writing the CSV data I get this error message:
INFO 2015-09-08 12:03:49,227 [[simplebatch_1].simplebatchFlow.stage1.02] org.mule.api.processor.LoggerMessageProcessor: default logger java.io.PipedInputStream#1bca8e6
INFO 2015-09-08 12:03:49,258 [[simplebatch_1].File1.dispatcher.01] org.mule.lifecycle.AbstractLifecycleManager: Initialising: 'File1.dispatcher.29118412'. Object is: FileMessageDispatcher
INFO 2015-09-08 12:03:49,258 [[simplebatch_1].File1.dispatcher.01] org.mule.lifecycle.AbstractLifecycleManager: Starting: 'File1.dispatcher.29118412'. Object is: FileMessageDispatcher
INFO 2015-09-08 12:03:49,258 [[simplebatch_1].File1.dispatcher.01] org.mule.transport.file.FileConnector: Writing file to: D:\MulePOC's\output\myoutput1
ERROR 2015-09-08 12:03:54,999 [XML_READER0_0] org.jetel.graph.Node: java.lang.OutOfMemoryError: Java heap space
ERROR 2015-09-08 12:03:55,000 [WatchDog_0] org.jetel.graph.runtime.WatchDog: Component [XML READER:XML_READER0] finished with status ERROR.
Java heap space
Please suggest how to resolve this. Thanks.
You need to increase the JVM memory for Mule. You can find the config file at $MULE_HOME/conf/wrapper.conf.
You will find something like this:
# Increase Permanent Generation Size from default of 64m
# Increase this value if you get "Java.lang.OutOfMemoryError: PermGen space error"
# This property is not used when running java 8 and may cause a warning.
wrapper.java.additional.7=-XX:PermSize=256m
wrapper.java.additional.8=-XX:MaxPermSize=256m
# GC settings
wrapper.java.additional.9=-XX:+HeapDumpOnOutOfMemoryError
wrapper.java.additional.10=-XX:+AlwaysPreTouch
wrapper.java.additional.11=-XX:+UseParNewGC
wrapper.java.additional.12=-XX:NewSize=512m
wrapper.java.additional.13=-XX:MaxNewSize=512m
wrapper.java.additional.14=-XX:MaxTenuringThreshold=8
You can change these configurations as you like.
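One caveat (based on the standard Java Service Wrapper properties Mule's wrapper.conf uses): the settings above tune PermGen and GC, while a java.lang.OutOfMemoryError: Java heap space is governed by the heap entries in the same file, for example (values illustrative, in MB):
# Initial and maximum Java heap size, in MB
wrapper.java.initmemory=1024
wrapper.java.maxmemory=2048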
