WSO2: How to transform a file to another file

I have a local file that looks like this:
<userCode>001</userCode><productCode>001</productCode><Fee>1.00</Fee>
<userCode>002</userCode><productCode>002</productCode><Fee>2.00</Fee>
<userCode>003</userCode><productCode>003</productCode><Fee>3.00</Fee>;
I need to transform this file to:
<Fee>1.00</Fee><productCode>001</productCode>
<Fee>2.00</Fee><productCode>002</productCode>
<Fee>3.00</Fee><productCode>003</productCode>
I think I need to read the file first and then write it out. How do I do this in WSO2?

I hope you have a top-level element that wraps this data, making it proper XML, e.g.:
<data><userCode>001</userCode><productCode>001</productCode><Fee>1.00</Fee>... </data>
Steps
1) Configure the VFS transport sender and receiver in axis2.xml.
2) Engage the application/xml message builder and formatter for your content type (this can be anything, e.g. file/xml).
3) Configure a VFS proxy to listen for this content type in a given directory.
4) When the message comes in, use the XSLT mediator to do the transformation (see the sketch after this list).
5) Use the VFS sender to store the transformed file.
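For step 4, a rough XSLT sketch could look like the following. It assumes each userCode/productCode/Fee triplet has been wrapped in a <record> element under the <data> root; the <record> name is an assumption for illustration, not something from the original file:
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="xml" omit-xml-declaration="yes"/>
  <!-- Keep only Fee and productCode, in that order, for every record -->
  <xsl:template match="/data">
    <data>
      <xsl:for-each select="record">
        <record>
          <xsl:copy-of select="Fee"/>
          <xsl:copy-of select="productCode"/>
        </record>
      </xsl:for-each>
    </data>
  </xsl:template>
</xsl:stylesheet>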
thanks,
Charith

Related

Spring Batch FlatFileItemWriter does not write data to a file

I am new to Spring Batch. I am trying to use FlatFileItemWriter to write data to a file. The challenge is that the application creates the file at the given path, but does not write the actual content into it.
Following are details related to code:
List<String> dataFileList : This list contains the data that I want to write to a file
FlatFileItemWriter<String> writer = new FlatFileItemWriter<>();
writer.setResource(new FileSystemResource("C:\\Desktop\\test"));
writer.open(new ExecutionContext());
writer.setLineAggregator(new PassThroughLineAggregator<>());
writer.setAppendAllowed(true);
writer.write(dataFileList);
writer.close();
This is just generating the file at proper place but contents are not getting written into the file.
Am I missing something? Help is highly appreciated.
Thanks!
This is not the proper way to use a Spring Batch writer to write data. You need to declare the writer as a bean first (see the configuration sketch after these links):
Define a Job bean
Define a Step bean
Use your writer bean in the Step
Have a look at the following examples:
https://github.com/pkainulainen/spring-batch-examples/blob/master/spring-boot/src/main/java/net/petrikainulainen/springbatch/csv/in/CsvFileToDatabaseJobConfig.java
https://spring.io/guides/gs/batch-processing/
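As a rough sketch of that wiring (class, bean, and step names here are illustrative, assuming Spring Batch 4.x with a simple in-memory reader; they are not from the original post):
import java.util.Arrays;

import org.springframework.batch.core.Job;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.configuration.annotation.EnableBatchProcessing;
import org.springframework.batch.core.configuration.annotation.JobBuilderFactory;
import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
import org.springframework.batch.item.file.FlatFileItemWriter;
import org.springframework.batch.item.file.transform.PassThroughLineAggregator;
import org.springframework.batch.item.support.ListItemReader;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.io.FileSystemResource;

@Configuration
@EnableBatchProcessing
public class FileWriteJobConfig {

    // The writer is a bean, so Spring Batch opens and closes it as part of the step lifecycle.
    @Bean
    public FlatFileItemWriter<String> writer() {
        FlatFileItemWriter<String> writer = new FlatFileItemWriter<>();
        writer.setResource(new FileSystemResource("C:\\Desktop\\test"));
        writer.setLineAggregator(new PassThroughLineAggregator<>());
        writer.setAppendAllowed(true);
        return writer;
    }

    @Bean
    public Step writeStep(StepBuilderFactory steps, FlatFileItemWriter<String> writer) {
        return steps.get("writeStep")
                .<String, String>chunk(10)
                .reader(new ListItemReader<>(Arrays.asList("001", "002", "003")))
                .writer(writer)
                .build();
    }

    @Bean
    public Job writeJob(JobBuilderFactory jobs, Step writeStep) {
        return jobs.get("writeJob").start(writeStep).build();
    }
}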
You probably need to force a sync to disk. From the docs at https://docs.spring.io/spring-batch/trunk/apidocs/org/springframework/batch/item/file/FlatFileItemWriter.html,
setForceSync
public void setForceSync(boolean forceSync)
Flag to indicate that changes should be force-synced to disk on flush. Defaults to false, which means that even with a local disk changes could be lost if the OS crashes in between a write and a cache flush. Setting to true may result in slower performance for usage patterns involving many frequent writes.
Parameters:
forceSync - the flag value to set
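For the standalone usage shown in the question, that would look something like this (a sketch only, reusing the question's dataFileList and path; in a real job the writer should still be driven by a step):
FlatFileItemWriter<String> writer = new FlatFileItemWriter<>();
writer.setResource(new FileSystemResource("C:\\Desktop\\test"));
writer.setLineAggregator(new PassThroughLineAggregator<>());
writer.setForceSync(true);        // force changes to be synced to disk on flush
writer.afterPropertiesSet();      // validate the configuration before use
writer.open(new ExecutionContext());
writer.write(dataFileList);
writer.close();                   // close() flushes and releases the file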

Akka stream each element to ftp sink

I want to write each element in an Akka stream to a different FTP file. Using Alpakka I can write every element to the same file using an FTP sink, but I cannot seem to figure out how to write each element to a different file.
source.map(el -> /* to byte string */).to(Ftp.toPath("/file.xml", settings));
So every el should end up in a different file.
If you want to use the Alpakka FTP sink, you have to do something along the lines of
def sink(n: String): Sink[String, NotUsed] = Ftp.toPath(s"$n.txt", settings)
source.runForeach(s ⇒ Source.single(s).runWith(sink(s)))
Otherwise, you'll need to create your own sink that establishes an FTP connection and writes the data as part of the input handler; that means writing a custom graph stage. More info about this can be found in the docs.
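Since the question uses the Java DSL, the same idea roughly translates as follows (a sketch only: the FtpSettings and Materializer are assumed to exist already, and the per-element file name is made up here):
import akka.NotUsed;
import akka.stream.Materializer;
import akka.stream.alpakka.ftp.FtpSettings;
import akka.stream.alpakka.ftp.javadsl.Ftp;
import akka.stream.javadsl.Source;
import akka.util.ByteString;

public class PerElementFtpWrite {
    // Run a small stream per element, each with its own Ftp.toPath sink.
    static void writeEachToOwnFile(Source<String, NotUsed> source,
                                   FtpSettings settings,
                                   Materializer mat) {
        source.runForeach(
            el -> Source.single(ByteString.fromString(el))
                        .runWith(Ftp.toPath("/" + el + ".xml", settings), mat),
            mat);
    }
}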

Mime4j - Sending email through SMTP Server

I've implemented a solution to parse email files (.eml) into objects using Mime4J. The process parses an email file, creates an object, and writes a new file to disk.
I was wondering whether it is possible to send the Mime4J MimeMessage through Transport.send(mimeMessage) instead of creating a new file.
The simplest approach would be to use the Mime4J Message.writeTo method to write the message to a ByteArrayOutputStream, then wrap the byte array with a ByteArrayInputStream and use that to construct a JavaMail MimeMessage object.
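A minimal sketch of that approach, assuming a Mime4j Message with a writeTo(OutputStream) method (as in older Mime4j versions such as 0.6) and a JavaMail Session already configured for your SMTP server:
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;

import javax.mail.Session;
import javax.mail.Transport;
import javax.mail.internet.MimeMessage;

public class Mime4jToJavaMail {
    public static void send(org.apache.james.mime4j.message.Message mime4jMessage,
                            Session smtpSession) throws Exception {
        // Serialize the Mime4j message into memory instead of onto disk.
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        mime4jMessage.writeTo(out);

        // Re-parse the raw bytes as a JavaMail MimeMessage and send it over SMTP.
        MimeMessage javaMailMessage =
                new MimeMessage(smtpSession, new ByteArrayInputStream(out.toByteArray()));
        Transport.send(javaMailMessage);
    }
}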
A more complex but more efficient approach would be to create a class that subclasses MimeMessage and delegates most of the methods to the corresponding methods on the Mime4J Message object.

Hadoop Map Whole File in Java

I am trying to use Hadoop in Java with multiple input files. At the moment I have two files, a big one to process and a smaller one that serves as a sort of index.
My problem is that I need to keep the whole index file unsplit while the big file is distributed to each mapper. Is there any way provided by the Hadoop API to do such a thing?
In case I have not expressed myself correctly, here is a link to a picture that represents what I am trying to achieve: picture
Update:
Following the instructions provided by Santiago, I am now able to insert a file (or the URI, at least) from Amazon's S3 into the distributed cache like this:
job.addCacheFile(new Path("s3://myBucket/input/index.txt").toUri());
However, when the mapper tries to read it, a 'file not found' exception occurs, which seems odd to me. I have checked the S3 location and everything seems to be fine. I have used other S3 locations to provide the input and output files.
Error (note the single slash after the s3:)
FileNotFoundException: s3:/myBucket/input/index.txt (No such file or directory)
The following is the code I use to read the file from the distributed cache:
URI[] cacheFile = output.getCacheFiles();
BufferedReader br = new BufferedReader(new FileReader(cacheFile[0].toString()));
while ((line = br.readLine()) != null) {
//Do stuff
}
I am using Amazon's EMR, S3 and the version 2.4.0 of Hadoop.
As mentioned above, add your index file to the distributed cache and then access it in your mapper. Behind the scenes, the Hadoop framework will ensure that the index file is sent to all the task trackers before any task is executed and is available for your processing. In this case, the data is transferred only once and is available to all the tasks related to your job.
However, instead of adding the index file to the distributed cache in your driver code directly, make your driver implement the Tool interface, override its run method, and launch it with ToolRunner. This gives you the flexibility of passing the index file to the distributed cache from the command line when submitting the job.
If you are using ToolRunner, you can add files to the distributed cache directly from the command line when you run the job. There is no need to copy the file to HDFS first. Use the -files option to add files:
hadoop jar yourjarname.jar YourDriverClassName -files cachefile1,cachefile2,cachefile3,...
You can access the files in your Mapper or Reducer code as below:
File f1 = new File("cachefile1");
File f2 = new File("cachefile2");
File f3 = new File("cachefile3");
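A minimal driver sketch for that (the class name matches the command line above; the job name and elided setup are placeholders for illustration):
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class YourDriverClassName extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
        // getConf() already contains whatever -files / -D options ToolRunner parsed.
        Job job = Job.getInstance(getConf(), "index-join");
        job.setJarByClass(YourDriverClassName.class);
        // ... set mapper, reducer, input and output paths here ...
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new Configuration(), new YourDriverClassName(), args));
    }
}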
You could push the index file to the distributed cache, and it will be copied to the nodes before the mapper is executed.
See this SO thread.
Here's what helped me solve the problem.
Since I am using Amazon's EMR with S3, I needed to change the syntax a bit, as stated on the following site.
It was necessary to add the name the system will use to read the file from the cache, as follows:
job.addCacheFile(new URI("s3://myBucket/input/index.txt" + "#index.txt"));
This way, the program understands that the file introduced into the cache is named just index.txt. I also needed to change the syntax for reading the file from the cache. Instead of reading the entire path stored in the distributed cache, only the filename has to be used, as follows:
URI[] cacheFile = output.getCacheFiles();
BufferedReader br = new BufferedReader(new FileReader("index.txt"));
while ((line = br.readLine()) != null) {
//Do stuff
}

Dynamically change the database name in the SqlMapConfig.xml file

I want to change the database name in the SqlMapConfig.xml file from the application. Can anyone help me?
You can override the database when you instantiate the Ibatis mapper instance; I do this for switching between debug and release builds of the application and hence accessing a different target database.
If your XML file is in an assembly called DatalayerAssembly, for example, you might have a method that returns a new Ibatis instance based on a database name, like this:
public IBatisNet.DataMapper.ISqlMapper CreateNewIbatis(
    String serverName,
    String databaseName)
{
    // Load the config file (embedded resource in assembly).
    System.Xml.XmlDocument xmlDoc = IBatisNet.Common.Utilities.Resources.GetEmbeddedResourceAsXmlDocument("SqlMapConfig.xml, DatalayerAssembly");

    // Overwrite the connectionString in the XmlDocument, hence changing database.
    // NB if your connection string needs extra parameters,
    // such as `Integrated Security=SSPI;` for user authentication,
    // then append that to InnerText too.
    xmlDoc["sqlMapConfig"]["database"]["dataSource"]
        .Attributes["connectionString"]
        .InnerText = "Server=" + serverName + ";Database=" + databaseName;

    // Instantiate the Ibatis mapper using the XmlDocument via a builder,
    // instead of Ibatis reading the config file itself.
    IBatisNet.DataMapper.Configuration.DomSqlMapBuilder builder = new IBatisNet.DataMapper.Configuration.DomSqlMapBuilder();
    IBatisNet.DataMapper.ISqlMapper ibatisInstance = builder.Configure(xmlDoc);

    // Now use the ISqlMapper instance ("ibatisInstance") as normal.
    return ibatisInstance;
}
I'm using this approach with Ibatis 1.6.2.0 on .NET, but the exact SqlMap config file might vary by version. Either way the approach is the same; you just might need a different XML path (i.e. the bit that reads ["sqlMapConfig"]["database"] etc. may need changing for your config file).
Hope that helps.
