FileConnector with Start Delay - file

Hi, is there any option to provide a start delay with file:inbound-endpoint (FileConnector)?
(I can see that Quartz is an alternative solution for this.)
<flow name="ReadingFlow" processingStrategy="synchronous">
<file:inbound-endpoint path="{file.incoming.files}"
connector-ref="DefaultNoStreamingConnector"
pollingFrequency="${file.polling.frequency}">
</file:inbound-endpoint>

Switch to a fixed-frequency scheduling strategy. It has a parameter to delay the first check:
<ftp:listener doc:name="On New or Updated File">
    <scheduling-strategy>
        <fixed-frequency startDelay="123"/>
    </scheduling-strategy>
</ftp:listener>
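Since the question is about the File connector rather than FTP, the equivalent Mule 4 file listener would presumably look like the sketch below; the directory value and the millisecond figures are placeholders, and the attribute names should be checked against the File connector documentation for your version.
<file:listener doc:name="On New or Updated File" directory="${file.incoming.files}">
    <scheduling-strategy>
        <!-- wait 60 s before the first poll, then poll every 10 s -->
        <fixed-frequency startDelay="60000" frequency="10000" timeUnit="MILLISECONDS"/>
    </scheduling-strategy>
</file:listener>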

No, there isn't. You could use a Poll scope as an alternative way to start the flow. An inbound endpoint cannot be used in the middle of a flow because it is a message source (it starts the flow), not an operation; you can, however, use the Mule Requester Module to reference an inbound endpoint inside the flow after the Poll has started it.
I didn't mention the Quartz endpoint because it has been deprecated for years; the Poll scope replaces it.
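For illustration, a minimal Mule 3 sketch of that approach might look like the following; the fixed-frequency-scheduler attributes, the mulerequester namespace, and the 60-second delay are assumptions to be checked against your Mule version and the module's documentation.
<flow name="ReadingFlow" processingStrategy="synchronous">
    <poll doc:name="Poll">
        <!-- first poll after 60 s, then at the configured frequency -->
        <fixed-frequency-scheduler frequency="${file.polling.frequency}"
            startDelay="60000" timeUnit="MILLISECONDS"/>
        <!-- the Requester fetches a file on demand inside the flow -->
        <mulerequester:request resource="file://${file.incoming.files}" doc:name="Request file"/>
    </poll>
    <logger level="INFO" message="#[payload]" doc:name="Process the file"/>
</flow>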

Related

Apache Camel route with no "to" endpoint

I am using Apache Camel to assist with capturing message data emitted by a third-party software package. In this particular instance, I only need to capture what the software produces; there is no receiver on the other end (really no "end" to go to).
So, I tried to set up a route with just the "from" endpoint and no "to" endpoint. Apparently this is incorrect usage, as I received the following exception:
[2018-08-15 11:08:03.205] ERROR: string.Launcher:191 - Exception
org.apache.camel.FailedToCreateRouteException: Failed to create route route1 at: >>> From[mina:udp://localhost:9877?sync=false] <<< in route: Route(route1)[[From[mina:udp://localhost:9877?sync=false]] -... because of Route route1 has no output processors. You need to add outputs to the route such as to("log:foo").
at org.apache.camel.model.RouteDefinition.addRoutes(RouteDefinition.java:1063)
at org.apache.camel.model.RouteDefinition.addRoutes(RouteDefinition.java:196)
at org.apache.camel.impl.DefaultCamelContext.startRoute(DefaultCamelContext.java:974)
at org.apache.camel.impl.DefaultCamelContext.startRouteDefinitions(DefaultCamelContext.java:3301)
at org.apache.camel.impl.DefaultCamelContext.doStartCamel(DefaultCamelContext.java:3024)
at org.apache.camel.impl.DefaultCamelContext.access$000(DefaultCamelContext.java:175)
at org.apache.camel.impl.DefaultCamelContext$2.call(DefaultCamelContext.java:2854)
at org.apache.camel.impl.DefaultCamelContext$2.call(DefaultCamelContext.java:2850)
at org.apache.camel.impl.DefaultCamelContext.doWithDefinedClassLoader(DefaultCamelContext.java:2873)
at org.apache.camel.impl.DefaultCamelContext.doStart(DefaultCamelContext.java:2850)
at org.apache.camel.support.ServiceSupport.start(ServiceSupport.java:61)
at org.apache.camel.impl.DefaultCamelContext.start(DefaultCamelContext.java:2819)
at {removed}.Launcher.startCamel(Launcher.java:189)
at {removed}.Launcher.main(Launcher.java:125)
Caused by: java.lang.IllegalArgumentException: Route route1 has no output processors. You need to add outputs to the route such as to("log:foo").
at org.apache.camel.model.RouteDefinition.addRoutes(RouteDefinition.java:1061)
... 13 more
How do I set up a camel route that allows me to intercept (capture) the message traffic coming from the source, and not send it "to" anything? There is no need for a receiver. What would be an appropriate "to" endpoint that just drops everything it receives?
The exception suggests to("log:foo"). What does this do?
You can see if the Stub component can help: http://camel.apache.org/stub.html
Example:
from("...")
.to("stub:nowhere");
The exception suggests to("log:foo"). What does this do?
It sends your route's messages to an endpoint using the log: component (http://camel.apache.org/log.html), which basically dumps the message contents (body and/or headers and/or properties) to your log file under the appropriate log category.
If you just want to drop everything received, it's a good choice:
to("log:com.company.camel.sample?level=TRACE&showAll=true&multiline=true")
Apparently, if you're on Linux or Unix, you can also redirect to /dev/null, as in this example:
to("file:/dev?fileName=null")
I am not sure whether it can be used on Windows, but I don't think so.
Note that the syntax to("file:/dev/null") does not work, as it points to a directory called null; with the fileName option it does work.
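Putting it together with the endpoint from the question, a minimal route that captures the UDP traffic and simply logs it might look like this sketch (the log category and options are chosen purely for illustration):
import org.apache.camel.builder.RouteBuilder;

public class CaptureOnlyRoute extends RouteBuilder {
    @Override
    public void configure() {
        // consume the UDP datagrams and dump them to the log; no downstream receiver needed
        from("mina:udp://localhost:9877?sync=false")
            .to("log:com.company.camel.sample?level=TRACE&showAll=true&multiline=true");
    }
}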

Camel errorHandler / deadLetterChannel REST response

I have a Camel REST endpoint (Jetty) which validates and processes incoming requests. Besides specific Exception handlers (onException), it uses a DLQ error handler (errorHandler(deadLetterChannel...)) which is set up to retry 3 times; if unsuccessful, the message is moved to the DLQ.
My question is: how do I still return a user-friendly error message to the client if an unexpected Exception occurs, rather than the full Exception body? Is there some config I'm missing on the errorHandler?
I've tried to find some examples in the Camel unit tests (DeadLetterChannelHandledExampleTest) and Camel in Action 2 (Chapter 11), but none seemed to have specific examples for this scenario.
Code is:
from(ROUTE_URI)
    .errorHandler(deadLetterChannel("{{activemq.webhook.dlq.queue}}")
        .onPrepareFailure(new FailureProcessor())
        .maximumRedeliveries(3)
        .redeliveryDelay(1000))
    .bean(ParcelProcessor.class, "process");
Thank you for your help!
Use a second route as the DLQ, e.g. direct:dead, and in that route send the message to the real DLQ first, then transform the message afterwards to return a friendly response.
errorHandler(deadLetterChannel("direct:dead"));

from("direct:dead")
    .to("{{activemq.webhook.dlq.queue}}")
    .transform(constant("Sorry something was wrong"));
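A rough end-to-end sketch of that idea, reusing ROUTE_URI and FailureProcessor from the question (so it is not runnable on its own without them), might look like:
errorHandler(deadLetterChannel("direct:dead")
    .onPrepareFailure(new FailureProcessor())
    .maximumRedeliveries(3)
    .redeliveryDelay(1000));

// the original route keeps its processing logic
from(ROUTE_URI)
    .bean(ParcelProcessor.class, "process");

// "DLQ" route: park the failed message on the real queue,
// then replace the body so the REST client gets a friendly response
from("direct:dead")
    .to("{{activemq.webhook.dlq.queue}}")
    .transform(constant("Sorry, something went wrong"));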

How to configure Camel/Spring JMS for exactly-once-in-order processing (EOIO)?

I am trying to realize exactly-once-in-order processing of JMS messages (consumed from an Oracle Advanced Queue, but this shouldn't matter).
For testing purposes I've created the following route:
<route id="inputAqTest12">
<from id="_fromAqTest12" uri="aqTest12:queue:QUEPOSTDATA?transacted=true"/>
<transacted id="_transactedAqTest12"/>
<!-- Simulate exceptions! -->
<when id="_when1">
<simple>${random(10)} == 0</simple>
<throwException exceptionType="java.lang.Exception"
id="_throwException1" message="Random error!"/>
</when>
<to .../>
</route>
This works as expected as long as my Oracle queue is configured for immediate redelivery of rejected messages. But I want a retry delay greater than zero, so that the message survives an outage of a back-end system for several minutes, without "endless" unsuccessful attempts.
So what I'm looking for is the possibility to delay the retrieval of the next message in case of a rollback of the transaction. What I have found is that Spring's DefaultMessageListenerContainer has a waitBeforeRecoveryAttempt() method that is called if message processing fails (https://github.com/spring-projects/spring-framework/blob/master/spring-jms/src/main/java/org/springframework/jms/listener/DefaultMessageListenerContainer.java#L1070). Unfortunately, this only happens after the second failure in a row. I think I need the delay already after the first rollback, so that I can gain EOIO by configuring the delay after a rollback on the JMS consumer to e.g. 10 seconds, while setting the retry delay of the Oracle Advanced Queue to a lower value (e.g. 5 seconds).
Maybe I'm also on the wrong track, so any idea how to realize EOIO behavior for a Spring/Camel JMS consumer is welcome. Thanks in advance for your support!
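For context, the aqTest12 endpoint in the route above presumably maps to a Camel JMS component tied to a Spring JmsTransactionManager; a minimal sketch of that assumed wiring (the bean names and the Oracle AQ connection factory bean, aqConnectionFactory, are hypothetical) could look like:
<bean id="aqTest12" class="org.apache.camel.component.jms.JmsComponent">
    <property name="connectionFactory" ref="aqConnectionFactory"/>
    <property name="transactionManager" ref="jmsTransactionManager"/>
</bean>

<bean id="jmsTransactionManager" class="org.springframework.jms.connection.JmsTransactionManager">
    <property name="connectionFactory" ref="aqConnectionFactory"/>
</bean>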

Camel ActiveMQ client blocking, temp storage usage immediately hits 100%

I'm seeing 100% utilisation of ActiveMQ's temp storage (configured to be 100 MB), and the ActiveMQ client is blocking. This 100% usage remains permanently, and I have no idea what's going on.
I have a camel route, which consumes from a queue (QUEUE.IN) using the JmsTransactionManager.
public final class RouteUnderTest extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from("activemq-transacted:QUEUE.IN")
            .bean(myBean)
            .to("activemq:QUEUE.OUT");
    }
}
While processing the message from this queue I'm invoking a spring-integration client (myBean) which is configured as follows
<int:gateway id="myBean" service-interface="MyBean">
<int:method name="request" request-channel="channel"/>
</int:gateway>
<int:chain input-channel="channel">
<int:transformer ref="transformedToJsonHere"/>
<jms:outbound-gateway request-destination-name="QUEUE.MYBEAN"
receive-timeout="5000"
explicit-qos-enabled="true"
time-to-live="5000"
delivery-persistent="false"/>
<int:transformer ref="transformedToAnObjectHere"/>
</int:chain>
My broker is configured to use LevelDB, and with the following usage limits:
<persistenceAdapter>
    <levelDB directory="${activemq.data}/leveldb"/>
</persistenceAdapter>

<systemUsage>
    <systemUsage>
        <memoryUsage>
            <memoryUsage percentOfJvmHeap="70"/>
        </memoryUsage>
        <storeUsage>
            <storeUsage limit="500 mb"/>
        </storeUsage>
        <tempUsage>
            <tempUsage limit="100 mb"/>
        </tempUsage>
    </systemUsage>
</systemUsage>
When my route consumes the message and then attempts to put a non-persistent message on QUEUE.OUT, the client is blocked and my broker shows 100% usage of temp storage.
And I see the following activemq logs
2015-07-28 15:44:59,678 | INFO | Usage(default:temp:queue://QUEUE.MYBEAN:temp) percentUsage=0%, usage=104857600, limit=104857600, percentUsageMinDelta=1%;Parent:Usage(default:temp) percentUsage=100%, usage=104857600, limit=104857600, percentUsageMinDelta=1%: Temp Store is Full (0% of 104857600). Stopping producer (ID:orbit-vm-55561-1438094698190-1:1:3:1) to prevent flooding queue://QUEUE.MYBEAN. See http://activemq.apache.org/producer-flow-control.html for more info (blocking for: 1s) | org.apache.activemq.broker.region.Queue | ActiveMQ NIO Worker 6
The queues look like this (you can see that the QUEUE.IN message has not been dequeued, because it's still being processed transactionally, and no message has gone to QUEUE.MYBEAN).
I can fix this problem with any one of the following approaches:
Use KahaDB instead of LevelDB
Increase temp storage limit (150MB seems to do it but I haven't experimented a great deal)
Configure tempDataStore in activemq.xml (see below)
When configuring the tempDataStore it looks like:
<tempDataStore>
    <bean xmlns="http://www.springframework.org/schema/beans" class="org.apache.activemq.leveldb.LevelDBStore">
        <property name="directory" value="${activemq.data}/tmp"/>
    </bean>
</tempDataStore>
I should add, we were using KahaDB previously and this worked fine, but the upgrade to LevelDB has exposed this issue. Reverting to KahaDB is not an option.
I'm hoping someone can explain what we're seeing here, as the results are really difficult to understand. Why does using LevelDB necessitate a higher temp usage limit, and why does configuring the tempDataStore explicitly also fix the problem?
I don't fully understand what's going on here so I'm worried that simply increasing the temp usage limit a little will just hide the problem until a later date.
Versions:
ActiveMQ: 5.11.1
Camel: 2.14.0
Spring: 4.0.8.RELEASE
Spring Integration: 4.0.5.RELEASE
We ran into exactly the same issue with ActiveMQ 5.13.2.
The solution when using LevelDB is to explicitly configure a dedicated tempDataStore, as you did.
If you don't, the broker uses the same store (LevelDB) for both persistent messages (persistent usage) and non-persistent messages (temp usage). You may therefore end up in situations where the broker no longer accepts any non-persistent message, just because the store already holds persistent ones up to the configured tempUsage limit. It will, however, still accept persistent ones if your storeUsage limit is set higher...
When using KahaDB, the broker automatically uses another store for the non-persistent messages (created in the tmp directory), so you don't have the problem.
Look at the following code for more in-depth information: https://github.com/apache/activemq/blob/activemq-5.13.2/activemq-broker/src/main/java/org/apache/activemq/broker/BrokerService.java#L1739
When reading that code, remember that LevelDBStore implements PListStore, but KahaDBStore doesn't...

Mule - Dynamic file inbound endpoint error

My ESB flow needs to get files from a dynamic folder. This folder name changes based on month and year. Hence, I configured my inbound endpoint as shown below, but I am getting the error below. I really appreciate any help on this.
Flow:
<flow name="DataMapperTestFlow" doc:name="DataMapperTestFlow">
<file:inbound-endpoint path="C:\#[new Date().format('yyyy\\MMMM')]" moveToDirectory="C:\#[new Date().format('yyyy\\MMMM')]\backup" pollingFrequency="10000" responseTimeout="10000" doc:name="File">
<file:filename-regex-filter pattern=".*.xls" caseSensitive="true"/>
</file:inbound-endpoint>
<custom-transformer class="ExcelToJava" doc:name="Java"/>
<jdbc-ee:outbound-endpoint exchange-pattern="one-way" queryKey="insertTestHeaders" connector-ref="NewDatabase" doc:name="InsertHeaders"/>
<set-payload value="#[payload.excelData.excelRows]" doc:name="Set Payload"/>
<jdbc-ee:outbound-endpoint exchange-pattern="one-way" queryKey="insertTestRows" connector-ref="NewDatabase" doc:name="InsertRows"/>
</flow>
Error:
org.mule.api.endpoint.MalformedEndpointException: The endpoint
"file:///C:/#[new Date().format('yyyy/MMMM')]" is malformed and cannot
be parsed. If this is the name of a global endpoint, check the name
is correct, that the endpoint exists, and that you are using the
correct configuration (eg the "ref" attribute). Note that names on
inbound and outbound endpoints cannot be used to send or receive
messages; use a named global endpoint instead.. Only Outbound
endpoints can be dynamic
"Only Outbound endpoints can be dynamic" quite says it all. You can have a look at the Mule Requester Module if it suits your needs, or try creating endpoints/flows programmatically with a scheduler and Java/Groovy/etc code.
