Serving a file efficiently using Play 2.3

I need to serve some content from an Action in the form of a file: basically, I am creating CSV content on the fly and sending it to the client.
I cannot do it using sendFile, since the file does not really exist; I tried using chunked transfer, but I get a really slow response (on localhost I got the file at about 100 KB/s, which I think is really strange).
Is there a way for me to set the content type and write the response "line by line", without having to specify the content length "a priori"?

Here's one way using a simple predefined Enumerator that will produce the response from bytes written to an OutputStream:
def csv = Action {
  val enumerator = Enumerator.outputStream { out =>
    out.write(...)
    // Keep writing to the OutputStream
    out.close()
  }
  Ok.chunked(enumerator.andThen(Enumerator.eof)).withHeaders(
    "Content-Type" -> "text/csv",
    "Content-Disposition" -> s"attachment; filename=test.csv"
  )
}
This is simple enough for relatively small files (or when generating the content is inherently slow). Note, however, that per the documentation this approach has no back-pressure: writing a large amount of data to the OutputStream can quickly fill up memory if the client cannot download it fast enough.
Update:
After testing this some more, it seems the size of the byte arrays you write to the OutputStream makes a huge difference in throughput.
Using this sample stream:
val s = Stream.continually(0.toByte)
Writing in chunks of 1KB to the OutputStream like this resulted in 6MB/s of throughput:
(0 until 1024*1024).foreach { i =>
  out.write(s.take(1024).toArray)
}
However, if I write only 10 bytes at a time, throughput drops to less than 100 KB/s. So my suggestion for writing CSVs in chunked form with this method is to write multiple rows at a time to the OutputStream rather than one row at a time.

Related

Spring Batch FlatFileItemWriter does not write data to a file

I am new to Spring Batch. I am trying to use FlatFileItemWriter to write data to a file. The challenge is that the application creates the file at the given path but does not write the actual content into it.
Following are details related to code:
List<String> dataFileList : This list contains the data that I want to write to a file
FlatFileItemWriter<String> writer = new FlatFileItemWriter<>();
writer.setResource(new FileSystemResource("C:\\Desktop\\test"));
writer.open(new ExecutionContext());
writer.setLineAggregator(new PassThroughLineAggregator<>());
writer.setAppendAllowed(true);
writer.write(dataFileList);
writer.close();
The file is generated in the proper place, but the contents are not written into it.
Am I missing something? Help is highly appreciated.
Thanks!
This is not the proper way to use a Spring Batch writer to write data. You need to declare the writer as a bean and let the framework drive it (a minimal configuration sketch follows the links below):
Define a Job bean
Define a Step bean
Use your writer bean in the Step
Have a look at the following examples:
https://github.com/pkainulainen/spring-batch-examples/blob/master/spring-boot/src/main/java/net/petrikainulainen/springbatch/csv/in/CsvFileToDatabaseJobConfig.java
https://spring.io/guides/gs/batch-processing/
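For illustration, here is a minimal Java-config sketch of that wiring; this is only a sketch, assuming Spring Batch 3.x/4.x with @EnableBatchProcessing, where the class name, bean names and the stand-in ListItemReader are hypothetical, while the writer settings mirror the question:
import java.util.Arrays;

import org.springframework.batch.core.Job;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.configuration.annotation.EnableBatchProcessing;
import org.springframework.batch.core.configuration.annotation.JobBuilderFactory;
import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
import org.springframework.batch.item.ItemReader;
import org.springframework.batch.item.file.FlatFileItemWriter;
import org.springframework.batch.item.file.transform.PassThroughLineAggregator;
import org.springframework.batch.item.support.ListItemReader;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.io.FileSystemResource;

@Configuration
@EnableBatchProcessing
public class FileWriteJobConfig {

    // The writer is declared as a bean; the framework calls open(), write() and
    // close() for us as the step runs, so we never call them by hand.
    @Bean
    public FlatFileItemWriter<String> writer() {
        FlatFileItemWriter<String> writer = new FlatFileItemWriter<>();
        writer.setResource(new FileSystemResource("C:\\Desktop\\test"));
        writer.setLineAggregator(new PassThroughLineAggregator<>());
        writer.setAppendAllowed(true);
        return writer;
    }

    // A stand-in reader feeding the same kind of String items as dataFileList.
    @Bean
    public ItemReader<String> reader() {
        return new ListItemReader<>(Arrays.asList("row 1", "row 2", "row 3"));
    }

    // The step wires the reader and writer together in chunks.
    @Bean
    public Step writeFileStep(StepBuilderFactory steps,
                              ItemReader<String> reader,
                              FlatFileItemWriter<String> writer) {
        return steps.get("writeFileStep")
                .<String, String>chunk(10)
                .reader(reader)
                .writer(writer)
                .build();
    }

    // The job simply runs the step.
    @Bean
    public Job writeFileJob(JobBuilderFactory jobs, Step writeFileStep) {
        return jobs.get("writeFileJob")
                .start(writeFileStep)
                .build();
    }
}
Launching the job (for example through Spring Boot or a JobLauncher) then opens the file, aggregates each String item with the pass-through aggregator, and flushes it on commit.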
You probably need to force a sync to disk. From the docs at https://docs.spring.io/spring-batch/trunk/apidocs/org/springframework/batch/item/file/FlatFileItemWriter.html,
setForceSync
public void setForceSync(boolean forceSync)
Flag to indicate that changes should be force-synced to disk on flush. Defaults to false, which means that even with a local disk changes could be lost if the OS crashes in between a write and a cache flush. Setting to true may result in slower performance for usage patterns involving many frequent writes.
Parameters:
forceSync - the flag value to set
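If you keep driving the writer by hand as in the question, a minimal standalone sketch of where that flag goes could look like this; it assumes Spring Batch 3.x/4.x (where write() takes a List), the path and sample data follow the question, and all properties are deliberately set before open():
import java.util.Arrays;
import java.util.List;

import org.springframework.batch.item.ExecutionContext;
import org.springframework.batch.item.file.FlatFileItemWriter;
import org.springframework.batch.item.file.transform.PassThroughLineAggregator;
import org.springframework.core.io.FileSystemResource;

public class ForceSyncWriteExample {
    public static void main(String[] args) throws Exception {
        List<String> dataFileList = Arrays.asList("first line", "second line");

        FlatFileItemWriter<String> writer = new FlatFileItemWriter<>();
        writer.setResource(new FileSystemResource("C:\\Desktop\\test"));
        writer.setLineAggregator(new PassThroughLineAggregator<>());
        writer.setAppendAllowed(true);
        writer.setForceSync(true);            // force-sync changes to disk on flush
        writer.afterPropertiesSet();          // validate the configuration
        writer.open(new ExecutionContext());  // open only after everything is configured
        writer.write(dataFileList);
        writer.close();                       // flush and release the file
    }
}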

Bro: Disable ALL log generation

I created a Bro script with the objective of extracting all files for all possible protocols from a pcap file, but I don't want to write all the logs. Bro creates a log file for each protocol, for example 'http.log', 'smtp.log', etc.; even a 'weird.log' is generated. My pcap files are large (20 GB), so each log file contains over 30 MB of information. This log generation reduces the performance of the file extraction.
I can disable 'conn.log' with the line Log::disable_stream(Conn::LOG), but what about all the other protocol logs?
This is my script
@load base/files/extract

event bro_init()
{
    Log::disable_stream(Conn::LOG);
}

event file_sniff(f: fa_file, meta: fa_metadata)
{
    local ext = "";
    if ( meta?$mime_type )
        ext = split_string(meta$mime_type, /\//)[1];
    local fname = fmt("%s-%s.%s", f$source, f$id, ext);
    Files::add_analyzer(f, Files::ANALYZER_EXTRACT, [$extract_filename=fname]);
}
You can use the none writer like this:
bro -r packets.pcap Log::default_writer=Log::WRITER_NONE
I'm not totally convinced that writing these logs is harming your performance in any real way, though. Typically, writing the extracted files to disk is what causes the biggest overhead.
Here's a way to turn off whatever logging's been turned on (prior to bro_init), without having to know which stream IDs are relevant:
event bro_init()
{
    # We don't want any output other than from this script.
    for ( id in Log::active_streams )
        Log::disable_stream(id);
}
This construct makes me twitch a little about modifying a table while iterating over it, but it seems to work and I can't actually find any way to peek at one key from a table without doing an iteration. I suppose one could write
event bro_init()
{
    while ( |Log::active_streams| ) {
        for ( id in Log::active_streams ) {
            Log::disable_stream(id);
            break;
        }
    }
}
but that's hideous and I'm not going to use it unless I discover that I have to.
I achieved this with this line of code in main.bro:
Log::remove_filter(Conn::LOG, "default");

How do I append to a file in an Azure storage file share?

I want to write entries to a log file stored in Azure file storage. I currently have this:
var log = "My log entry";
var client = _storageAccount.CreateCloudFileClient();
var share = client.GetShareReference(Config.LogShare);
share.CreateIfNotExists();
var root = share.GetRootDirectoryReference();
var logfile = root.GetFileReference("log.txt");
if (!logfile.Exists()) logfile.Create(0);
// What goes here to append to the file...?
I can see plenty of examples of how to do this with Blobs, or how to upload an entire file, but how do I just append to an existing file?
I have tried this:
var buffer = Encoding.GetEncoding("UTF-8").GetBytes(log.ToCharArray());
using (var fileStream = logfile.OpenWrite(0)) {
    fileStream.Write(buffer, (int)logfile.Properties.Length, buffer.Length);
}
But then I get this error:
The remote server returned an error: (416) The range specified is invalid for the current size of the resource..
I managed to work this out myself. You just need to increase the size of the file by the number of new bytes you want to write to it, and then write the new data to that new empty space at the end of the file, like this:
var client = _storageAccount.CreateCloudFileClient();
var share = client.GetShareReference(Config.LogShare);
share.CreateIfNotExists();
var root = share.GetRootDirectoryReference();
var logfile = root.GetFileReference("log.txt");
if (!logfile.Exists()) logfile.Create(0);
var buffer = Encoding.UTF8.GetBytes($"{log}\r\n");
logfile.Resize(logfile.Properties.Length + buffer.Length);
using (var fileStream = logfile.OpenWrite(null)) {
    fileStream.Seek(buffer.Length * -1, SeekOrigin.End);
    fileStream.Write(buffer, 0, buffer.Length);
}
You can do this with blobs: https://blogs.msdn.microsoft.com/windowsazurestorage/2015/04/13/introducing-azure-storage-append-blob/
It's a shame it doesn't work with files too.
The Azure File storage REST API doesn't support appending to an existing file. To achieve this, mount the file share on your machine as a drive and append to the file just like a regular local file.
Actually, I don't think you really need appending functionality for your code above. You can specify the file size in CloudFile.OpenWrite() / CloudFile.Create(), or try CloudFile.UploadFromStream() instead of CloudFile.OpenWrite().
This error could also be due to multi-threaded access.
I bet if you locked the file before accessing it, you would not face this problem.
There are many ways to update the file.
Since you already managed to get the share, the root, the folder and the file, here is a portion of my code that worked for me.
if (!fileLock.IsWriteLockHeld) fileLock.EnterWriteLock();
try
{
    using (var stream = new MemoryStream(content, false))
    {
        file.UploadFromStream(stream, null, options);
    }
}
catch (Exception ex)
{
    File.AppendAllText(FileName, ex.ToString());
}
finally
{
    if (fileLock.IsWriteLockHeld)
        fileLock.ExitWriteLock();
}
Where fileLock is declared as:
protected ReaderWriterLockSlim fileLock = new ReaderWriterLockSlim();
Having said that, I am not saying that this is the best way ever to do it.
The two things I would like you to keep in mind:
1. Lock any resource that is likely to be accessed by more than one thread (that is very common in Azure).
2. Get familiar with the asynchronous methods that Azure provides, and use them where they fit well.
Coming back to your original problem about appending to the existing file: all of the CloudFile methods will overwrite the existing file. Cloud files are not meant for frequent writing; frequent writes really do hurt performance, and adding the locking overhead on top makes it worse.
Cloud files are meant to store a big bulk of data once and for all. If you want to add another bulk, you can create another file.
Buffer your data on the client until it reaches some size, then create an algorithm to select the file name and upload it all at once.

Hadoop Map Whole File in Java

I am trying to use Hadoop in Java with multiple input files. At the moment I have two files: a big one to process and a smaller one that serves as a sort of index.
My problem is that I need to keep the whole index file unsplit while the big file is distributed to each mapper. Is there any way provided by the Hadoop API to do such a thing?
In case I have not expressed myself correctly, here is a link to a picture that represents what I am trying to achieve: picture
Update:
Following the instructions provided by Santiago, I am now able to insert a file (or the URI, at least) from Amazon's S3 into the distributed cache like this:
job.addCacheFile(new Path("s3://myBucket/input/index.txt").toUri());
However, when the mapper tries to read it, a 'file not found' exception occurs, which seems odd to me. I have checked the S3 location and everything seems to be fine. I have used other S3 locations to introduce the input and output files.
Error (note the single slash after the s3:)
FileNotFoundException: s3:/myBucket/input/index.txt (No such file or directory)
The following is the code I use to read the file from the distributed cache:
URI[] cacheFile = output.getCacheFiles();
BufferedReader br = new BufferedReader(new FileReader(cacheFile[0].toString()));
while ((line = br.readLine()) != null) {
    //Do stuff
}
I am using Amazon's EMR, S3 and the version 2.4.0 of Hadoop.
As mentioned above, add your index file to the Distributed Cache and then access it in your mapper. Behind the scenes, the Hadoop framework ensures that the index file is sent to all the task trackers before any task is executed, so it is available for your processing. The data is transferred only once and is available to all the tasks of your job.
However, instead of hard-coding the index file into the Distributed Cache in your driver, have your driver implement the Tool interface and override its run method, launching it through ToolRunner. This gives you the flexibility of passing the index file to the Distributed Cache from the command line when submitting the job.
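As a rough sketch of that driver shape (class and mapper names are hypothetical; it assumes the Hadoop 2.x mapreduce API), implementing Tool and launching through ToolRunner lets GenericOptionsParser register options such as -files before run() is called:
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class IndexedJoinDriver extends Configured implements Tool {

    // Hypothetical mapper; it would read the cached index file in setup()/map()
    // (see the cache-reading snippets elsewhere in this thread).
    public static class BigFileMapper extends Mapper<LongWritable, Text, Text, Text> {
        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            context.write(new Text(Long.toString(key.get())), value);
        }
    }

    @Override
    public int run(String[] args) throws Exception {
        // Anything passed with -files has already been placed in the
        // distributed cache by GenericOptionsParser at this point.
        Job job = Job.getInstance(getConf(), "big file + index");
        job.setJarByClass(IndexedJoinDriver.class);
        job.setMapperClass(BigFileMapper.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // the big file
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // job output
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new Configuration(), new IndexedJoinDriver(), args));
    }
}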
If you are using ToolRunner, you can add files to the Distributed Cache directly from the command line when you run the job, with no need to copy the file to HDFS first. Use the -files option to add files:
hadoop jar yourjarname.jar YourDriverClassName -files cachefile1,cachefile2,cachefile3,...
You can access the files in your Mapper or Reducer code as below:
File f1 = new File("cachefile1");
File f2 = new File("cachefile2");
File f3 = new File("cachefile3");
You could push the index file to the distributed cache, and it will be copied to the nodes before the mapper is executed.
See this SO thread.
Here's what helped me to solve the problem.
Since I am using Amazon's EMR with S3, I needed to change the syntax a bit, as stated on the following site.
It was necessary to append the name the system would use to read the file from the cache, as follows:
job.addCacheFile(new URI("s3://myBucket/input/index.txt" + "#index.txt"));
This way, the program understands that the file introduced into the cache is named just index.txt. I also needed to change the syntax for reading the file from the cache: instead of reading the entire path stored in the distributed cache, only the filename has to be used, as follows:
URI[] cacheFile = output.getCacheFiles();
// Use only the fragment name given when the file was added to the cache:
BufferedReader br = new BufferedReader(new FileReader("index.txt"));
while ((line = br.readLine()) != null) {
    //Do stuff
}

Loading a large Firebird datafile table into a DataSet

I have a Firebird 1.0 data file weighing approximately 25 GB that I am working with. It has a table which stores documents and their pictures as blobs. So I am asking: is it possible to open such a big data file using FIB datasets? I first tried to open the dataset at runtime, with no success, as the grid stayed empty; I then tried to set it active at design time, which also failed to open: its Active property is set to true, but no data is fetched into the grid.
Have you any idea how to make it work? Do I have to set any blob cache options?
Or is it not possible at all?
For now I am developing on my laptop (Win 7 x64, 4 GB RAM); later it will be deployed to my server machine.
I've fixed it!
So my other question is about loading blob data into a TImage component using a stream.
I am doing it like this, but it pops up an Access Violation.
Here is my code, which you may look at:
DM->stImage->Active = true;
try {
    TMemoryStream *ms = new TMemoryStream();
    TStream *ps = DM->stImage->CreateBlobStream(DM->stImage->FieldByName("PHOTO"), bmRead);
    ms->Position = 0;
    ms->CopyFrom(ps, ps->Size);
    ms->SaveToFile("c:\\1.jpg");
    // imgPass->Picture->LoadFromStream(ms);
    imgPass->Picture->Graphic->LoadFromStream(ps);
    delete ms;
    delete ps;
}
catch (Exception &e) {
    ShowMessage(e.ToString());
}
It can save the file, but imgPass->Picture->Graphic->LoadFromStream(ps); does not work!
What could be the problem?
To avoid the AV you need to reset the stream position, which was moved forward during the call to the CopyFrom function.
So, your code should look like (only the relevant lines):
ms->CopyFrom(ps,ps->Size);
ms->SaveToFile("c:\\1.jpg");
ps->Position = 0; //<<<<<<<<<< here we reset the stream position
imgPass->Picture->Graphic->LoadFromStream(ps);
//imgPass->Picture->Bitmap->LoadFromStream(ps); // <<< if a bitmap and not JPEG
Hope this helps you.
P.S.: this question should be tagged C++ (or C++Builder) because it is not only a database subject.
