I'm trying to upload a file using multipart/form-data to a Camel route.
The upload itself works; however, I can't get the original file name.
Camel version is: 3.14.1
Update
With the following modification to the route, I managed to process binary files (getting the file name and storing them). However, with text files, the boundary footer gets appended to the stored file:
------WebKitFormBoundary7BH9nQ2RqDXvTRAJ--
The route definition:
rest("/v1/file-upload-form")
.post()
.consumes(MediaType.MULTIPART_FORM_DATA_VALUE)
.route()
.process((exchange) -> {
InputStream is = exchange.getIn().getBody(InputStream.class);
MimeBodyPart mimeMessage = new MimeBodyPart(is);
DataHandler dh = mimeMessage.getDataHandler();
exchange.getIn().setBody(dh.getInputStream());
exchange.getIn().setHeader(Exchange.FILE_NAME, dh.getName());
})
.to("file://" + incomingFolder);
Thank you in advance
Edwardo
Edit: Since you have everything else already working, I'd recommend the Stream Caching option.
As Nicolas suggested, check out Camel's MIME Multipart data format.
Also, the reason you're getting "Missing start boundary" is that your processor is consuming the InputStream. You can try to reset() it, but it might be better to consume the InputStream only once, or to enable Stream Caching.
Instead of stream caching, you could also just convert the stream to a string. Before your processor, add:
.convertBodyTo(String.class)
The string can be read over and over. If you still get the missing start boundary error, try logging the body before the unmarshal operation. Make sure the message is intact and that it indeed contains the start boundary.
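If you'd rather keep the processor approach from your update, a minimal sketch that lets javax.mail parse the boundaries (so the WebKit boundary footer no longer ends up in text files) could look like the following. It assumes the uploaded file is the first (or only) part of the multipart and that the javax.mail classes already used in the question are on the classpath:
.process(exchange -> {
    // uses javax.mail.internet.MimeMultipart and javax.mail.util.ByteArrayDataSource
    String contentType = exchange.getIn().getHeader(Exchange.CONTENT_TYPE, String.class);
    InputStream is = exchange.getIn().getBody(InputStream.class);
    // parse the whole multipart (boundaries included) instead of wrapping the raw stream
    // in a single MimeBodyPart, so the trailing boundary is never part of the content
    MimeMultipart multipart = new MimeMultipart(new ByteArrayDataSource(is, contentType));
    BodyPart filePart = multipart.getBodyPart(0); // assumes the file is the first (or only) part
    exchange.getIn().setHeader(Exchange.FILE_NAME, filePart.getFileName());
    exchange.getIn().setBody(filePart.getInputStream());
})
Parsing the whole MimeMultipart strips the boundaries for every part, binary or text, and getFileName() returns the original file name from the Content-Disposition header.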
Related
I am aiming to take a file a user attaches through a Lightning Component and create a document object containing the data.
So far I have overcome the request size limits by chunking the data being uploaded into 1MB chunks. When the Apex Aura method receives these chunks of data it will either create a new document (if it is the first chunk), or will retrieve the existing document and add the new chunk to the end.
Data is received Base64 encoded, and then decoded server-side.
As the document data is stored as a Blob, the original file contents will be read as a String, and then appended with the chunk received. The new contents are then converted back into a Blob to be stored within the ContentVersion object.
The problem I'm having is that strings in Apex have a maximum length of around 6,000,000 characters. Whenever the file size exceeds 6 MB, this limit is hit during the concatenation, which halts the file upload.
I have attempted to avoid this limit by converting the Blob to a String only when necessary for the concatenation (as suggested here https://developer.salesforce.com/forums/?id=906F00000008w9hIAA), but this hasn't worked. I'm guessing it was patched, because it's still technically allocating a string larger than the limit.
Code's really simple when appending so far:
ContentVersion originalDocument = [SELECT Id, VersionData FROM ContentVersion WHERE Id =: <existing_file_id> LIMIT 1];
Blob originalData = originalDocument.VersionData;
Blob appendedData = EncodingUtil.base64Decode(<base_64_data_input>);
Blob newData = Blob.valueOf(originalData.toString() + appendedData.toString());
originalDocument.VersionData = newData;
You will have a hard time with it.
You could try offloading the concatenation to an asynchronous process (@future/Queueable/Schedulable/Batchable); they get 12 MB of RAM instead of 6. Could buy you some time.
You could try cheating by embedding an iframe (Visualforce or a lightning:container tag? Or maybe a "canvas app") that would grab your file and do some manual JavaScript magic, calling the normal REST API for document upload: https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/dome_sobject_insert_update_blob.htm (the last code snippet is about multiple documents). Maybe jsforce? A sketch of that REST call follows after these options.
Can you upload it somewhere else (SharePoint? Heroku?) and have that system call into SF to push the files (no Apex = no heap size limit)? Or even look up "Files Connect".
Can you send an email with attachments? Crude, but if you write a custom Email-to-Case handler class you'll have 36 MB of RAM to work with.
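For the REST API option mentioned above, here is a minimal sketch of what the upload could look like from an external (non-Apex) system, assuming Java 11+, an access token obtained via OAuth beforehand, and illustrative names and API version:
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Base64;

public class ContentVersionUpload {
    public static void main(String[] args) throws Exception {
        String instanceUrl = "https://yourInstance.my.salesforce.com"; // hypothetical org URL
        String accessToken = "<ACCESS_TOKEN>"; // obtained via OAuth beforehand

        byte[] fileBytes = Files.readAllBytes(Path.of("big-file.pdf"));
        // VersionData is sent base64 encoded; the whole file goes up in one request
        String json = "{"
                + "\"Title\":\"big-file\","
                + "\"PathOnClient\":\"big-file.pdf\","
                + "\"VersionData\":\"" + Base64.getEncoder().encodeToString(fileBytes) + "\""
                + "}";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(instanceUrl + "/services/data/v56.0/sobjects/ContentVersion"))
                .header("Authorization", "Bearer " + accessToken)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(json))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
The linked documentation also shows a multipart/form-data variant of the same request; either way the insert happens outside Apex, so the 6 MB heap limit never comes into play.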
You wrote "we needed multiple files to be uploaded and the multi-file-upload component provided doesn't support all extensions". That may be caused by these:
In Experience Builder sites, the file size limits and types allowed follow the settings determined by site file moderation.
lightning-file-upload doesn't support uploading multiple files at once on Android devices.
If the Don't allow HTML uploads as attachments or document records security setting is enabled for your organization, the file uploader cannot be used to upload files with the following file extensions: .htm, .html, .htt, .htx, .mhtm, .mhtml, .shtm, .shtml, .acgi, .svg.
I am able to open and stream the file without issue using the following; however, I need to be able to use the file information that is stored inside the bucket.
const db = connection.connections[0].db
const bucket = new mongoose.mongo.GridFSBucket(db, {
  bucketName: bucketName
});
bucket.openDownloadStreamByName(filename).pipe(res)
For example, I would like to be able to set the following:
res.setHeader('Content-Type', (TYPE));
res.setHeader('Content-Length', (LENGTH));
I am wondering whether the call above allows options; I also don't know if the pipe stops us from setting the Content-Type and Content-Length after it starts piping.
According to the docs, you can't get file info from the stream, but from the source code it seems you can.
According to this and this, you could get contentType by accessing
bucket.openDownloadStreamByName(...).s.files[0].contentType
or
bucket.openDownloadStreamByName(...).s.file?.contentType
Problem: The file is not consumed from the server
I am using
from("test")
.routeId("test")
.pollEnrich()
.simple("smb://myUrl?password=test&fileName=${in.headers.test}")
.aggregationStrategy((Exchange oldExchange, Exchange newExchange) -> {
//do things
return newExchange;
})
I get no error, and I am sure the URL is OK, because when I use the same URL in the from(), the file gets consumed.
I don't understand what is happening here. I am using Camel 2.24.0 and camel-extra:camel-jcifs:2.23.1. I have also tried SMB2 using the library from github.jborza.camel-smbj, with the same outcome.
I tried to debug: in the GenericFileComponent class, in the createEndpoint method, I can see that the endpoint is correctly created. In debug mode I can successfully get the exchanges from that endpoint (it is an SmbEndpoint), and asking the SmbEndpoint for exchanges returns exactly the file I need from the server. An EventDrivenPollingConsumer is then created for this endpoint; it is started and looks fine. But when it hits consumer.receive() in the PollEnricher it blocks and no file is consumed. If I use a timeout, it returns null, so somehow it cannot find the file, or the consumer is wrong. I honestly have no clue at this point.
I had a look here too: https://github.com/apache/camel/blob/b9a3117f19dd19abd2ea8b789c42c3e86fe4c488/core/camel-core/src/test/java/org/apache/camel/component/file/FileConsumePollEnrichFileTest.java
and I have played with delays
&consumer.initialDelay=100&consumer.delay=100&consumer.bridgeErrorHandler=true
Then I tried to implement it with a processor, like here:
https://github.com/apache/camel/blob/b9a3117f19dd19abd2ea8b789c42c3e86fe4c488/core/camel-core/src/test/java/org/apache/camel/component/file/FileConsumePollEnrichFileUsingProcessorTest.java
The same result :(
At some point the file was suddenly consumed, but this happened only once; I cannot understand this behavior.
Sounds like you have a read lock problem. Can you find any files in .done with the same name as the file you are trying to consume?
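If it is indeed a read lock issue, one thing to try is making the read lock explicit on the pollEnrich URI and giving the enricher a timeout, so it fails fast instead of blocking in consumer.receive(). This is only a sketch: readLock is a generic file-component option, and whether camel-jcifs honours every mode the same way is an assumption worth verifying:
from("test")
    .routeId("test")
    .pollEnrich()
        // readLock=changed waits until the remote file stops growing;
        // readLock=none skips the check entirely (useful to rule the lock out)
        .simple("smb://myUrl?password=test&fileName=${in.headers.test}&readLock=none")
        // give up after 10s and return null instead of blocking forever in consumer.receive()
        .timeout(10000)
        .aggregationStrategy((Exchange oldExchange, Exchange newExchange) -> {
            // do things
            return newExchange;
        })
With readLock=none the file is picked up even if another process still holds it, which at least tells you whether the read lock is the blocker.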
I have a simple route which polls zip files from FTP server. The zip file consists of one file that needs processing and zero or more attachments.
I am trying to use ZipFileDataFormat for splitting and I'm able to split and route the items as desired i.e. send the processing file to the processor and other files to the aggregator endpoint.
The route looks like below:
from("sftp://username#server/folder/path?password=password&delay=600000")
    .unmarshal(getZipFileDataFormat()).split(body(Iterator.class)).streaming()
        .log("CamelSplitComplete :: ${header.CamelSplitComplete}")
        .log("Split Size :: ${header.CamelSplitSize}")
        .choice()
            .when(header(MyConstants.CAMEL_FILE_NAME_HEADER).contains(".json"))
                .to(JSON_ENDPOINT).endChoice()
            .otherwise()
                .to(AGGREGATOR_ENDPOINT)
            .endChoice()
        .end();
getZipFileDataFormat
private ZipFileDataFormat getZipFileDataFormat() {
    ZipFileDataFormat zipFile = new ZipFileDataFormat();
    zipFile.setUsingIterator(true);
    return zipFile;
}
The splitting works fine. However, I can see in the logs that the two headers CamelSplitComplete and CamelSplitSize are not set as expected: CamelSplitComplete is always false, and CamelSplitSize has no value.
Because of this, I am not able to aggregate based on the size. I am using eagerCheckCompletion() for getting the input exchange in the aggregator route. My aggregator route looks like below.
from(AGGREGATOR_ENDPOINT)
    .aggregate(new ZipAggregationStrategy()).constant(true)
    .eagerCheckCompletion().completionSize(header("CamelSplitSize"))
    .to("file:///tmp/").end();
I read in the Apache documentation that these headers are always set. Am I missing anything here? Any pointer in the right direction would be very helpful.
I was able to get the whole route to work. Since the splitter runs in streaming mode, it doesn't know the total number of entries up front, so CamelSplitSize can't drive the aggregation. Instead, I had to add a sort of pre-processor which would set some essential headers (the outgoing file name and the file count of the zip) that I'd require for aggregation.
from("sftp://username#server/folder/path?password=password&delay=600000").to("file:///tmp/")
    .beanRef("headerProcessor").unmarshal(getZipFileDataFormat())
    .split(body(Iterator.class)).streaming()
        .choice()
            .when(header(Exchange.FILE_NAME).contains(".json"))
                .to(JSON_ENDPOINT).endChoice()
            .otherwise()
                .to(AGGREGATOR_ENDPOINT)
            .endChoice()
        .end();
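The headerProcessor bean isn't shown above; a hypothetical version that derives the two headers used by the aggregator (entry count and outgoing file name, with MyConstants holding whatever header names you prefer) could look roughly like this:
import java.io.ByteArrayInputStream;
import java.util.zip.ZipInputStream;
import org.apache.camel.Exchange;

public class HeaderProcessor {
    public void process(Exchange exchange) throws Exception {
        // count the entries in the zip so the aggregator knows how many parts to expect
        byte[] zipBytes = exchange.getIn().getBody(byte[].class);
        int entryCount = 0;
        try (ZipInputStream zis = new ZipInputStream(new ByteArrayInputStream(zipBytes))) {
            while (zis.getNextEntry() != null) {
                entryCount++;
            }
        }
        // reuse the incoming zip name as the outgoing file name for the re-zipped result
        String zipName = exchange.getIn().getHeader(Exchange.FILE_NAME, String.class);
        exchange.getIn().setHeader(MyConstants.HEADER_TOTAL_FILE_COUNT, entryCount);
        exchange.getIn().setHeader(MyConstants.HEADER_OUTGOING_FILE_NAME, zipName);
        // put the bytes back so the unmarshal/split step still sees the zip content
        exchange.getIn().setBody(zipBytes);
    }
}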
After that, the zip aggregation strategy worked as expected. I'm including the aggregation route here just to make the answer complete.
from(AGGREGATOR_ENDPOINT)
    .aggregate(header(MyConstants.HEADER_OUTGOING_FILE_NAME), new ZipAggregationStrategy())
    .eagerCheckCompletion().completionSize(header(MyConstants.HEADER_TOTAL_FILE_COUNT))
    .setHeader(Exchange.FILE_NAME, simple("${header.outgoingFileName}"))
    .to("file:///tmp/").end();
I have a Camel route that should return a file in the response, which is created based on the request data. While this works fine with the following (greatly simplified) route, the problem is that I need to first create an actual file on the server that I can then add to the exchange body.
As I don't want these files piling up on disk, I would prefer to either not create them at all or delete them directly from the same route.
The only way around this I currently see is to have a regular cleanup job that deletes these temporary files.
Any suggestions on how to solve this in a better way?
from("cxfrs://...")
.process(exchange -> {
File file = new File("out.pdf");
// write data to new FileOutputStream(file);
exchange.getIn().setBody(file);
})
The response content type is application/octet-stream.
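One way to keep the route self-contained, sketched here rather than taken from the question: write to a temporary file, read the bytes back into the message body, and delete the file before the route returns (uses java.nio.file.Files). This trades disk for memory, so it only suits responses small enough to buffer:
from("cxfrs://...")
    .process(exchange -> {
        // hypothetical temp file so concurrent requests don't overwrite each other
        File file = File.createTempFile("out-", ".pdf");
        try {
            // write data to new FileOutputStream(file);
            // hand the bytes (not the File) to the response, so the file can go immediately
            exchange.getIn().setBody(Files.readAllBytes(file.toPath()));
        } finally {
            Files.deleteIfExists(file.toPath());
        }
    })
If whatever generates the PDF can write to an OutputStream instead of a File, a ByteArrayOutputStream avoids creating the file at all.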