I am new to Mule. I have to do the following task.
A file is at some location. I need to move that file to some other location. The criterion for selecting the destination is the file name.
Suppose the file name is 'abc_loc1'; then the file should be moved into folder Location1. If the file name is 'abc_loc2', it should be moved into Location2.
You can use the Mule file transport with inbound and outbound endpoints to move files, and either set a dynamic path attribute on the outbound endpoint or use choice routing based on the original filename. The original file name is available as #[message.inboundProperties.originalFilename].
UPDATE (example flow):
<file:connector name="File"/>
<flow name="exampleFlow">
<file:inbound-endpoint connector-ref="File" path="/tmp/1" responseTimeout="10000" />
<set-variable variableName="myPath" value="#[message.inboundProperties['originalFilename'].substring(message.inboundProperties['originalFilename'].indexOf('_')+1)]" />
<file:outbound-endpoint path="/tmp/#[flowVars['myPath']]" responseTimeout="10000" connector-ref="File" outputPattern="error#[message.inboundProperties['originalFilename']]"/>
</flow>
UPDATE 2:
To use choice routing, replace the above file:outbound-endpoint with something like this:
<choice>
<when expression="#[flowVars['myPath'] == '1']">
<file:outbound-endpoint path="/tmp/1" responseTimeout="10000" connector-ref="File" outputPattern="error#[message.inboundProperties['originalFilename']]"/>
</when>
<when expression="#[flowVars['myPath'] == '2']">
<file:outbound-endpoint path="/tmp/2" responseTimeout="10000" connector-ref="File" outputPattern="error#[message.inboundProperties['originalFilename']]"/>
</when>
</choice>
I get a file path as an input to Mule inside XML. Using an XPath expression, I am able to extract the path. I want to read a particular file from that path. I tried to define a file inbound endpoint as below, but it doesn't seem to be working.
<flow name="flow1">
....
....
<set-session-variable variableName="filePath" value="#[xpath://filePath]" />
<flow-ref name="fileFlow"/>
</flow>
<flow name="fileFlow">
<file:inbound-endpoint path="#[header:SESSION:filePath]" />
</flow>
My understanding here is that no code can be placed before an inbound-endpoint. Hence I defined it in another flow. Please suggest if there is a way to read the file from a specified path.
Unfortunately, you cannot programmatically call an inbound-endpoint like that.
However, the same functionality can be achieved using the Mule Requester module:
Example:
<flow name="RequestFile" doc:name="RequestFile">
<http:inbound-endpoint exchange-pattern="request-response" host="localhost" port="8081" path="requestfile" doc:name="HTTP"/>
<mulerequester:request config-ref="Mule_Requester" resource="file:///s/tmp/demorequester/read/#[message.inboundProperties['filename']]" returnClass="java.lang.String" doc:name="Request a file"/>
</flow>
Instructions here: https://github.com/mulesoft/mule-module-requester and https://blogs.mulesoft.com/dev/mule-dev/introducing-the-mule-requester-module/
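For the flows in the question above, a minimal sketch could look like the following. The config element name and the use of the filePath session variable are assumptions based on the question (filePath is assumed to hold an absolute path), not tested config:
<mulerequester:config name="Mule_Requester" doc:name="Mule Requester"/>
<flow name="fileFlow">
    <!-- read the file whose absolute path was stored in the session variable by flow1 -->
    <mulerequester:request config-ref="Mule_Requester"
        resource="file://#[sessionVars['filePath']]"
        returnClass="java.lang.String" doc:name="Request the file"/>
</flow>
Since fileFlow has no message source, it can still be invoked via the flow-ref from flow1, and session variables propagate across the flow-ref.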
I have a file inbound endpoint with a zip file coming in. If an error occurs during the flow I use a file outbound endpoint with the output pattern
#[message.outboundProperties.originalFileName]
The zip file is then corrupt and cannot be opened by Windows. The zip file has also increased in size. Anyone know what's going on?
The code
</spring:beans>
<file:connector name="fileConnectorNonStreaming"
autoDelete="true" doc:name="File" streaming="false"
validateConnections="true">
<service-overrides messageFactory="org.mule.transport.file.FileMuleMessageFactory" />
</file:connector>
<smtp:connector name="SMTP" validateConnections="true"
doc:name="SMTP" />
<file:connector name="fileStreamingConnector"
autoDelete="true" streaming="true" validateConnections="true"
doc:name="File" />
<custom-transformer class="org.mule.transformer.simple.ObjectToString"
name="ObjectToStringTransformer" doc:name="Java"></custom-transformer>
<jdbc:connector name="JDBC" dataSource-ref="MySQL_Data_Source"
validateConnections="true" queryTimeout="-1" pollingFrequency="0"
doc:name="JDBC">
<jdbc:query key="commitFileNames"
value="insert into ${sta.database.source.table} (NAME, REF, DATE, TIME, DESCRIPTION) values (#[message.payload[0].toString()], #[message.payload[1].toString()], (DATE_FORMAT(STR_TO_DATE(#[message.payload[2].toString()], '%d-%m-%Y'), '%Y-%m-%d')) , #[message.payload[3].toString()], #[message.payload[4].toString()]) ON DUPLICATE KEY UPDATE S_FILENAME = #[message.payload[0].toString()], S_DEBTREF = #[message.payload[1].toString()], S_DATE = (DATE_FORMAT(STR_TO_DATE(#[message.payload[2].toString()], '%d-%m-%Y'), '%Y-%m-%d')), S_TIME = #[message.payload[3].toString()] ,S_DESCRIPTION = #[message.payload[4].toString()]" />
</jdbc:connector>
<jdbc:mysql-data-source name="MySQL_Data_Source"
user="root" password="root" url="${database.mysql.url}${database.schema}"
transactionIsolation="UNSPECIFIED" doc:name="MySQL Data Source" />
<flow name="getfilenames" doc:name="getfilenames">
<file:inbound-endpoint doc:name="File"
responseTimeout="10000" path="${csv.zipped.files}" connector-ref="fileConnectorNonStreaming"
fileAge="2000"></file:inbound-endpoint>
<set-variable variableName="unzippedFilesDestinationPath"
value="${csv.unzipped.files.archive}" doc:name="Store the path of the target file to be unzipped" />
<set-variable variableName="targetFileName" value="${file.compressed.name}"
doc:name="Store the file name of the target file to be unzipped" />
<component class="com.mule.file.FileDeleter"
doc:name="Delete unzipped file from directory" />
<set-variable variableName="sourceZipFile" value="#[payload]"
doc:name="Store the incoming zip file" />
<component class="com.mule.file.ZipDeCompressor"
doc:name="Unzip and aquire target csv file" />
<set-variable variableName="unzippedFile" value="#[payload]"
doc:name="Store the unzipped file" />
<component class="com.influentialsoftware.sta.mule.csv.CsvParser"
doc:name="Parse CSV and validate" />
<foreach doc:name="For Each">
<jdbc:outbound-endpoint queryKey="commitFileNames"
connector-ref="JDBC" doc:name="Insert data" exchange-pattern="one-way"
queryTimeout="-1"></jdbc:outbound-endpoint>
</foreach>
<choice-exception-strategy doc:name="Choice Exception Strategy">
<catch-exception-strategy doc:name="Catch Exception Strategy"
when="#[exception.causedBy(java.io.IOException)]">
<file:outbound-endpoint path="${file.base.directory.errors}"
outputPattern="#[message.outboundProperties.originalFileName]"
responseTimeout="10000" connector-ref="fileStreamingConnector"
doc:name="toError"></file:outbound-endpoint>
Using FileMuleMessageFactory in fileConnectorNonStreaming sets the payload to a File object, while your output would prefer a byte array. Try adding a <file:file-to-byte-array-transformer /> before your file:outbound-endpoint.
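For example, applied to the error handler from your snippet, it could look something like this (just a sketch of where the transformer would go, keeping the endpoint attributes from the question):
<catch-exception-strategy doc:name="Catch Exception Strategy"
    when="#[exception.causedBy(java.io.IOException)]">
    <!-- turn the File payload into raw bytes so the zip is written out unchanged -->
    <file:file-to-byte-array-transformer />
    <file:outbound-endpoint path="${file.base.directory.errors}"
        outputPattern="#[message.outboundProperties.originalFileName]"
        responseTimeout="10000" connector-ref="fileStreamingConnector"
        doc:name="toError"/>
</catch-exception-strategy>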
I am using Mule 3.4 and I am trying to send a file from a folder as an email.
The console displays:
connector.file.mule.default.receiver.01] org.mule.transport.file.FileMessageReceiver: Lock obtained on file: C:\Users\bekbol\Documents\smtp\test.txt
My config file is below:
<flow name="outcomingSmtp" doc:name="outcomingSmtp">
<file:inbound-endpoint path="${file.outcomingSmtp}" responseTimeout="100000" doc:name="File" pollingFrequency="10000" moveToDirectory="${file.outcomingBackupSmtp}">
<file:filename-wildcard-filter pattern="*.txt" />
<file:file-to-string-transformer doc:name="File to String"/>
</file:inbound-endpoint>
<object-to-byte-array-transformer doc:name="Object to Byte Array"/>
<smtp:outbound-endpoint host="${smtp.host}" port="${smtp.port}" user="${email.username}" password="${email.password}" to="${header.to}" from="${header.from}" subject="${header.subject}" responseTimeout="100000" mimeType="text/plain" doc:name="SMTP">
<email:string-to-email-transformer doc:name="String to Email"/>
</smtp:outbound-endpoint>
</flow>
I don't think you need the <file:file-to-string-transformer doc:name="File to String"/> nested inside your <file:inbound-endpoint>. Move it outside, right after you close </file:inbound-endpoint>, and remove the <object-to-byte-array-transformer doc:name="Object to Byte Array"/>.
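A sketch of the flow with those two changes applied (all other attributes kept from your config):
<flow name="outcomingSmtp" doc:name="outcomingSmtp">
    <file:inbound-endpoint path="${file.outcomingSmtp}" responseTimeout="100000" doc:name="File"
        pollingFrequency="10000" moveToDirectory="${file.outcomingBackupSmtp}">
        <file:filename-wildcard-filter pattern="*.txt" />
    </file:inbound-endpoint>
    <!-- transformer moved out of the endpoint; object-to-byte-array removed -->
    <file:file-to-string-transformer doc:name="File to String"/>
    <smtp:outbound-endpoint host="${smtp.host}" port="${smtp.port}" user="${email.username}"
        password="${email.password}" to="${header.to}" from="${header.from}" subject="${header.subject}"
        responseTimeout="100000" mimeType="text/plain" doc:name="SMTP">
        <email:string-to-email-transformer doc:name="String to Email"/>
    </smtp:outbound-endpoint>
</flow>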
I have a requirement to transfer a file from an inbound directory to an outbound directory using the file connector in Mule. While being transferred, the file is processed in a working directory configured on the input file connector.
Now, my requirement is: if I place an old file in the input directory, the file in the working directory should have its last-modified date set to the current system timestamp.
It is something similar to the 'touch' command used in Unix to set the modified date.
Please note that I don't want to use any Groovy script or any other hack that could affect performance in order to achieve this.
Following is my Mule flow:
<file:connector name="File" autoDelete="true" streaming="true" validateConnections="true" doc:name="File" outputAppend="true"/>
<file:connector name="File1" autoDelete="false" streaming="false" validateConnections="true" doc:name="File"/>
<flow name="FileReadandDeleteFlow1" doc:name="FileReadandDeleteFlow1">
<file:inbound-endpoint responseTimeout="10000" doc:name="File" connector-ref="File" moveToDirectory="E:\backup\test_workingDir" path="E:\backup\test" moveToPattern="processingFile.xml">
</file:inbound-endpoint>
<file:outbound-endpoint path="E:\backup\test_out" outputPattern="Finaloutput.txt" responseTimeout="10000" connector-ref="File1" doc:name="File"/>
Thanks in advance
You can use #[function:dateStamp] or #[function:datestamp:dd-MM-yy] to achieve this, as described in this link.
An example would be:
<file:outbound-endpoint path="E:\backup\test_out" outputPattern="Finaloutput_#[function:dateStamp].txt" responseTimeout="10000" connector-ref="File1" doc:name="File"/>
EDIT:
To always show the current timestamp on your files in the working directory, you can create another flow which reads files from the working directory at a specific interval and simply copies them back to the same directory using a file:outbound-endpoint.
There is also MEL; using it we can get the current date and time:
#[server.dateTime.format("yyyyMMddhhmmss")].txt
You can set whatever format you like in the expression.
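For example, a rough sketch of that expression used in the outputPattern of the outbound endpoint from the question (path and connector name taken from the flow above):
<file:outbound-endpoint path="E:\backup\test_out"
    outputPattern="Finaloutput_#[server.dateTime.format('yyyyMMddhhmmss')].txt"
    responseTimeout="10000" connector-ref="File1" doc:name="File"/>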
This worked for me:
<file:outbound-endpoint path="YOUR_PATH" outputPattern="#[function:datestamp:yyyyMMdd-HHmmssSSSSSS]_#[message.inboundProperties.originalFilename]" responseTimeout="10000" doc:name="Backup In Mule"/>
We are using the below configuration to send a file from a single source to multiple remote destinations.
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:camel="http://camel.apache.org/schema/spring"
xmlns:util="http://www.springframework.org/schema/util"
xsi:schemaLocation="
http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/util http://www.springframework.org/schema/util/spring-util.xsd
http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd">
<routeContext id="gcgRatesOutbound" xmlns="http://camel.apache.org/schema/spring">
<route id="gcgRatesFileOut">
<from uri="file:{{nas.root}}/{{gcg.out.prices.dir}}?delay={{poll.delay}}&initialDelay={{initial.delay}}&readLock=rename&scheduledExecutorService=#scheduledExecutorService" />
<multicast stopOnException="true">
<to uri="scp://{{gcg.ste.Client1_User_Name}}#{{gcg.ste.Host_Name}}:{{gcg.ste.Port_Number}}/{{gcg.ste.Destination_Client1}}?knownHostsFile={{ssh.knownHosts}}&privateKeyFile={{ssh.privateKey}}" />
<to uri="scp://{{gcg.ste.Client2_User_Name}}#{{gcg.ste.Host_Name}}:{{gcg.ste.Port_Number}}/{{gcg.ste.Destination_Client2}}?knownHostsFile={{ssh.knownHosts}}&privateKeyFile={{ssh.privateKey}}" />
<to uri="scp://{{gcg.ste.Client3_User_Name}}#{{gcg.ste.Host_Name}}:{{gcg.ste.Port_Number}}/{{gcg.ste.Destination_Client3}}?knownHostsFile={{ssh.knownHosts}}&privateKeyFile={{ssh.privateKey}}" />
</multicast>
</route>
</routeContext>
</beans>
Using the above configuration the file reaches the remote destinations, which confirms that the connections to all the remote destinations were successful.
We need the file to be moved to the archive folder after it has been successfully transferred to all the remote destinations,
and it should be moved to the error folder in case of any error.
However, when I add the archival code (a <to> element) as a child of the multicast element in the above configuration and use the <doTry> and <doCatch> tags to move the file to the error folder in case of an error, the file does not reach the remote destinations.
<doTry>
<multicast stopOnException="true" parallelProcessing="true">
<to uri="scp://{{gcg.ste.Client1_User_Name}}#{{gcg.ste.Host_Name}}:{{gcg.ste.Port_Number}}/{{gcg.ste.Destination_Client1}}?knownHostsFile={{ssh.knownHosts}}&privateKeyFile={{ssh.privateKey}}" />
<to uri="scp://{{gcg.ste.Client2_User_Name}}#{{gcg.ste.Host_Name}}:{{gcg.ste.Port_Number}}/{{gcg.ste.Destination_Client2}}?knownHostsFile={{ssh.knownHosts}}&privateKeyFile={{ssh.privateKey}}" />
<to uri="scp://{{gcg.ste.Client3_User_Name}}#{{gcg.ste.Host_Name}}:{{gcg.ste.Port_Number}}/{{gcg.ste.Destination_Client3}}?knownHostsFile={{ssh.knownHosts}}&privateKeyFile={{ssh.privateKey}}" />
<to uri="file://{{nas.root}}/{{gcg.out.prices.dir}}?fileName={{archive.dir}}" />
</multicast>
<doCatch>
<exception>java.lang.Exception</exception>
<handled>
<constant>true</constant>
</handled>
<to uri="file://{{nas.root}}/{{gcg.out.prices.dir}}?fileName={{error.dir}}" />
</doCatch>
</doTry>
The file does not reach the remote destinations, no log is produced, and the file is moved to the archive folder.
I tried placing the remote destinations, the archival code and the move-to-error code in separate routes and referencing them from a single multicast, as below:
<to uri="direct:Client1FileOut" />
<to uri="direct:Client2FileOut" />
<to uri="direct:Client3FileOut" />
<to uri="direct:MoveToArchive" />
<route id="gcgFileOut1">
<from uri="direct:Client1FileOut" />
<doTry>
<to uri="scp://{{gcg.ste.Client1_User_Name}}#{{gcg.ste.Host_Name}}:{{gcg.ste.Port_Number}}/{{gcg.ste.Destination_Client1}}?knownHostsFile={{ssh.knownHosts}}&privateKeyFile={{ssh.privateKey}}" />
<doCatch>
<exception>java.lang.Exception</exception>
<handled>
<constant>true</constant>
</handled>
<to uri="direct:gcgError" />
</doCatch>
</doTry>
</route>
However, the file still does not reach the remote destinations, no log is produced, and the file is moved to the archive folder.
I am new to Camel.
I tried using the shareUnitOfWork attribute as below
<multicast shareUnitOfWork="true">
However, the file was moved to the archive folder along with a Test_2_19082013_3.txt.camelLock file.
I am not sure why this lock file is also moved to the archive when the file dropped was Test_2_19082013_3.txt.
Also, there is a folder named "ARCHIV~1" created in the drop location.
Something might have gone wrong when moving the file to the archive. You have stopOnException="true" and parallelProcessing="true", and the local file copy is probably done first, as it should be the fastest.
You probably want to print the error somewhere. Right now you catch the exception and mark the error as handled, yet you still expect logs. Use the log component and output some log statements manually, not only in case of error but also in case of success. You can log at debug level so that you can enable the log output when you need it, like now.
Another option to figure out what's going on is to enable tracing, which will make you less "blind".
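A rough sketch of where such log statements could sit in the route from the question (the from URI is shortened and the log messages are only examples); tracing can additionally be switched on with trace="true" on the <camelContext> that includes this routeContext:
<route id="gcgRatesFileOut">
    <from uri="file:{{nas.root}}/{{gcg.out.prices.dir}}?readLock=rename" />
    <log message="Picked up ${header.CamelFileName}" loggingLevel="DEBUG" />
    <doTry>
        <multicast stopOnException="true">
            <to uri="scp://{{gcg.ste.Client1_User_Name}}#{{gcg.ste.Host_Name}}:{{gcg.ste.Port_Number}}/{{gcg.ste.Destination_Client1}}?knownHostsFile={{ssh.knownHosts}}&amp;privateKeyFile={{ssh.privateKey}}" />
            <!-- ...the remaining scp and archive endpoints from the question... -->
        </multicast>
        <log message="Delivered ${header.CamelFileName} to all destinations" loggingLevel="DEBUG" />
        <doCatch>
            <exception>java.lang.Exception</exception>
            <handled><constant>true</constant></handled>
            <log message="Delivery failed for ${header.CamelFileName}: ${exception.message}" loggingLevel="ERROR" />
            <!-- the move-to-error endpoint from the question goes here -->
        </doCatch>
    </doTry>
</route>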
That said about debugging on your own:
You are archiving and storing error messages in the input directory. The lock file you see is created while Camel is reading the file. This is likely what's causing your application to malfunction.
{{nas.root}}/{{gcg.out.prices.dir}}
The fileName=... option is the file name, not another subdirectory.
So: file://{{nas.root}}/{{gcg.out.prices.dir}}/{{archive.dir}} should do it (likewise for the error path).
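Applied to your configuration, the archive endpoint inside the multicast and the error endpoint inside the doCatch would then look roughly like this:
<!-- archive copy: a subdirectory of the input folder, not a fileName option -->
<to uri="file://{{nas.root}}/{{gcg.out.prices.dir}}/{{archive.dir}}" />
<!-- error copy in the doCatch, same pattern -->
<to uri="file://{{nas.root}}/{{gcg.out.prices.dir}}/{{error.dir}}" />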