I am able to send files up to 4.5 MB from Salesforce to an AWS S3 bucket using an HTTP request. How can we send files as large as 50 MB from Salesforce using an HTTP request?
You need to reverse the flow. Apex has a 6 MB heap limit (12 MB if you use async Apex, 36 MB if it's an inbound email handler). 50 MB * 133% (the overhead of base64-encoding a binary payload) = 66.5 MB, way over the limit.
Send some notification (custom callout? platform event?) to the other system and have it log in and pull the file using the standard APIs, without invoking custom code or worrying about limits. In the message you could send it the REST API download URL ({instance.my.salesforce.com}/services/data/v50.0/sobjects/ContentVersion/068.../VersionData); fetching that via the REST API returns the raw binary payload in the response. If they'd rather have it base64-encoded, a SOAP API query of this field on the ContentVersion table might work better.
See "upload 20 mb file to 3rd Party services from Salesforce" and https://stackoverflow.com/a/56268939/313628
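For the pull half of that flow, here's a minimal Java sketch of what the other system could run once it receives the notification; the instance URL, access token, and ContentVersion Id are placeholders you'd take from your own configuration and message payload:

```java
import java.io.InputStream;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class ContentVersionDownloader {

    public static void main(String[] args) throws Exception {
        // Placeholders: your My Domain instance, a valid access token,
        // and the ContentVersion Id (068...) received in the notification.
        String instanceUrl = "https://yourinstance.my.salesforce.com";
        String accessToken = System.getenv("SF_ACCESS_TOKEN");
        String contentVersionId = "068xxxxxxxxxxxxxxx";

        // GET .../sobjects/ContentVersion/<Id>/VersionData returns the raw binary,
        // so no base64 decoding (and no Apex heap limit) is involved.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(instanceUrl
                        + "/services/data/v50.0/sobjects/ContentVersion/"
                        + contentVersionId + "/VersionData"))
                .header("Authorization", "Bearer " + accessToken)
                .GET()
                .build();

        HttpResponse<InputStream> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofInputStream());

        // Stream straight to disk (or into an S3 multipart upload) instead of
        // buffering the whole 50 MB file in memory.
        try (InputStream body = response.body()) {
            Files.copy(body, Path.of("downloaded-file.bin"), StandardCopyOption.REPLACE_EXISTING);
        }
    }
}
```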
Related
Currently I am doing a synchronous call to MuleSoft which returns a raw image (no encoding is done) and then storing the image as a document. Whenever we get bigger images, more than 6 MB, we hit the governor limit for maximum heap size. So I wanted to know: is there a way to get a reduced or compressed image?
I have no idea whether Mule has anything to preprocess or compress images...
In Apex you could try to make the operation asynchronous to benefit from the 12 MB heap limit. But there will be no UI element for it anymore; your component / user would have to periodically check whether the file got saved.
You could always change the direction: make Mule push to Salesforce over the standard API instead of Apex code pulling from Mule. From what I remember, the standard Files API is good for files up to 2 GB.
Maybe send some notification to Mule that you want file XYZ attached to account 123; Mule would insert the ContentVersion and ContentDocumentLink (see the sketch below), and Apex would periodically check whether they exist.
And when the file is no longer needed - a nightly job to delete files created by "Mr Mule" over a week ago?
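If you go the push route, here is a rough Java sketch of what the Mule side (or any external service) could do over the REST API. The org URL, Ids, filename, and the way the access token is obtained are all placeholders; a real integration would use a proper JSON library and, for very large files, the multipart form of the ContentVersion endpoint instead of base64-in-JSON:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Base64;

public class SalesforceFilePusher {

    private static final String INSTANCE_URL = "https://yourinstance.my.salesforce.com"; // placeholder
    private static final String API = "/services/data/v50.0/sobjects/";

    public static void main(String[] args) throws Exception {
        String accessToken = System.getenv("SF_ACCESS_TOKEN"); // obtained however Mule authenticates
        String accountId = "001xxxxxxxxxxxxxxx";               // the record the file should hang off
        byte[] fileBytes = Files.readAllBytes(Path.of("image.jpg"));

        HttpClient client = HttpClient.newHttpClient();

        // 1. Insert the ContentVersion (base64 in JSON keeps the sketch short; the multipart
        //    form of this endpoint avoids the base64 overhead for really big files).
        String contentVersionJson = """
                {"Title":"image.jpg",
                 "PathOnClient":"image.jpg",
                 "VersionData":"%s"}""".formatted(Base64.getEncoder().encodeToString(fileBytes));
        String contentVersionId = post(client, accessToken, "ContentVersion", contentVersionJson);

        // 2. Look up the ContentDocumentId generated for that version.
        HttpRequest query = HttpRequest.newBuilder()
                .uri(URI.create(INSTANCE_URL + API + "ContentVersion/" + contentVersionId
                        + "?fields=ContentDocumentId"))
                .header("Authorization", "Bearer " + accessToken)
                .GET().build();
        String contentDocumentId = extract(
                client.send(query, HttpResponse.BodyHandlers.ofString()).body(), "ContentDocumentId");

        // 3. Link the document to the account so it shows up in its Files related list.
        String linkJson = """
                {"ContentDocumentId":"%s",
                 "LinkedEntityId":"%s",
                 "ShareType":"V"}""".formatted(contentDocumentId, accountId);
        post(client, accessToken, "ContentDocumentLink", linkJson);
    }

    private static String post(HttpClient client, String token, String sobject, String json) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(INSTANCE_URL + API + sobject + "/"))
                .header("Authorization", "Bearer " + token)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(json))
                .build();
        return extract(client.send(request, HttpResponse.BodyHandlers.ofString()).body(), "id");
    }

    // Naive extraction of a string field from a small JSON response; use a JSON library in real code.
    private static String extract(String json, String field) {
        int start = json.indexOf("\"" + field + "\"");
        start = json.indexOf('"', json.indexOf(':', start) + 1) + 1;
        return json.substring(start, json.indexOf('"', start));
    }
}
```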
It's not possible in Salesforce to send files larger than 12 MB because we have an asynchronous heap size limit of 12 MB, but I don't want to use any AppExchange app, so how can I achieve this? To the S3 file object I am able to upload 20 MB.
Reverse it. Pull instead of push.
Send some notification and have a program running on AWS that would pull the document by ContentVersion's ID or something. (REST API call to /services/data/v52.0/sobjects/ContentVersion/put-id-here/VersionData)
You could send the session id in the notification or keep the credentials in the program.
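A Java sketch of the "credentials in the program" option: the AWS-side program exchanges stored connected-app credentials for an access token (the username-password OAuth flow is used here purely as an illustration) and then hits the VersionData URL above. Every environment variable and endpoint detail is a placeholder:

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class AwsSidePuller {

    public static void main(String[] args) throws Exception {
        // Placeholders: connected-app credentials and the login endpoint.
        String loginUrl = "https://login.salesforce.com/services/oauth2/token";
        String body = "grant_type=password"
                + "&client_id=" + enc(System.getenv("SF_CLIENT_ID"))
                + "&client_secret=" + enc(System.getenv("SF_CLIENT_SECRET"))
                + "&username=" + enc(System.getenv("SF_USERNAME"))
                + "&password=" + enc(System.getenv("SF_PASSWORD"));

        HttpClient client = HttpClient.newHttpClient();

        // 1. Exchange the stored credentials for an access token.
        HttpResponse<String> tokenResponse = client.send(
                HttpRequest.newBuilder()
                        .uri(URI.create(loginUrl))
                        .header("Content-Type", "application/x-www-form-urlencoded")
                        .POST(HttpRequest.BodyPublishers.ofString(body))
                        .build(),
                HttpResponse.BodyHandlers.ofString());

        // 2. Parse access_token and instance_url out of tokenResponse.body() with a JSON library,
        //    then call .../services/data/v52.0/sobjects/ContentVersion/<Id>/VersionData exactly as
        //    in the earlier download sketch and stream the binary response into S3.
        System.out.println(tokenResponse.body());
    }

    private static String enc(String value) {
        return URLEncoder.encode(value, StandardCharsets.UTF_8);
    }
}
```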
I have a website where users need to see a link to download a file (approximately 100 MB in size) only after authenticating (userid/password) on the website. Users should not be able to copy the link and use it later without authentication.
Can a REST API with Transfer-Encoding: chunked return such a huge file without timing out?
Note: We currently have Java Spring Boot based APIs for some basic functions returning JSON (text) responses.
How can I prevent the URL from being accessed later without authentication?
Is there any approach to generate dynamic URLs that are valid only for a few minutes? Should this logic live in the app server, or does a CMS like Drupal have this feature?
I am open to storing this file in a DB, in Drupal, or on a file server, whatever the recommended approach is for downloading the file securely. This file is not text/image/pdf; it is a binary file.
Note: My system does not use any Public Cloud like AWS/GCP/Azure
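To make the "URL valid only for a few minutes" idea concrete, here is one possible sketch on the existing Java Spring Boot stack: after login the server hands out a link carrying an expiry timestamp and an HMAC signature, and the download endpoint verifies both before streaming the file. The paths, secret handling, and token format are all illustrative; in practice you would also bind the signature to the user and file and compare signatures in constant time.

```java
import java.nio.charset.StandardCharsets;
import java.nio.file.Path;
import java.time.Instant;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

import org.springframework.core.io.FileSystemResource;
import org.springframework.http.HttpHeaders;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class ExpiringDownloadController {

    // Illustrative only: the secret would live in configuration, the file on a file server.
    private static final byte[] SECRET = "change-me".getBytes(StandardCharsets.UTF_8);
    private static final Path FILE = Path.of("/data/big-binary-file.bin");

    /** Called after the user has authenticated; returns a link valid for ~5 minutes. */
    @GetMapping("/download-link")
    public String createLink() {
        long expiresAt = Instant.now().plusSeconds(300).getEpochSecond();
        return "/download?expires=" + expiresAt + "&sig=" + sign(String.valueOf(expiresAt));
    }

    /** Streams the file only if the signature matches and the link has not expired. */
    @GetMapping("/download")
    public ResponseEntity<FileSystemResource> download(@RequestParam long expires,
                                                       @RequestParam String sig) {
        boolean expired = Instant.now().getEpochSecond() > expires;
        boolean tampered = !sign(String.valueOf(expires)).equals(sig);
        if (expired || tampered) {
            return ResponseEntity.status(HttpStatus.FORBIDDEN).build();
        }
        // Spring streams the resource to the client; the 100 MB file is never held in memory.
        return ResponseEntity.ok()
                .header(HttpHeaders.CONTENT_DISPOSITION, "attachment; filename=\"big-binary-file.bin\"")
                .body(new FileSystemResource(FILE.toFile()));
    }

    private String sign(String payload) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(SECRET, "HmacSHA256"));
            return Base64.getUrlEncoder().withoutPadding()
                    .encodeToString(mac.doFinal(payload.getBytes(StandardCharsets.UTF_8)));
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }
}
```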
I have a hybrid application built with Ionic, which makes requests to a REST service. I need to limit the amount of information sent daily to the service in order to save the user's mobile data when used in 3G / 4G. Is there a way to measure the amount of information in kb or mb in this context?
You could keep track of this on the server side and include a field in your JSON response with the running total for every request.
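For the server-side bookkeeping, here is a small sketch in Java (assuming a Java backend, which the question doesn't specify): a per-user, per-day byte counter whose total the REST layer copies into a response field such as dataUsedTodayBytes so the Ionic client can display it or throttle itself. The class, field, and budget names are made up for illustration:

```java
import java.time.LocalDate;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

/** Keeps a running per-user, per-day count of bytes sent, so each JSON response can report usage. */
public class DailyUsageTracker {

    private final Map<String, LongAdder> bytesSentToday = new ConcurrentHashMap<>();
    private volatile LocalDate day = LocalDate.now();

    /** Record the size of a response body about to be sent and return the new daily total in bytes. */
    public long record(String userId, byte[] responseBody) {
        resetIfNewDay();
        LongAdder counter = bytesSentToday.computeIfAbsent(userId, id -> new LongAdder());
        counter.add(responseBody.length);
        return counter.sum();
    }

    /** True once the user has crossed the daily budget (e.g. 5 MB), so the client can stop syncing. */
    public boolean overBudget(String userId, long budgetBytes) {
        resetIfNewDay();
        LongAdder counter = bytesSentToday.get(userId);
        return counter != null && counter.sum() > budgetBytes;
    }

    /** Drop all counters once the calendar day rolls over. */
    private synchronized void resetIfNewDay() {
        if (!LocalDate.now().equals(day)) {
            bytesSentToday.clear();
            day = LocalDate.now();
        }
    }
}
```

The REST layer would call record(...) just before writing each response and include the returned total in the payload; the client then decides whether to keep syncing over 3G/4G.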
Will the response timer on Google App Engine start upon submitting the web page's form?
If I'm going to upload a file that is greater than 1 MB, I could split the file into 1 MB chunks to fit within the Google App Engine Datastore limit. My concern is that if the client's internet connection is slow, it would eat up the 30-second timer, right? If that's the case, is it impossible to upload large files over a slow connection?
The 30-second response time limit only applies to code execution, so uploading the actual file as part of the request body is excluded from it. The timer only starts once the request has been fully sent to the server by the client and your code starts handling the submitted request. Hence it doesn't matter how slow your client's connection is.
As a side note, instead of splitting your file into multiple parts, try using the Blobstore. I am using it for images and it raises the storage limit to 50 MB. (Remember to enable billing to get access to the Blobstore.)
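For context, a minimal sketch of that Blobstore flow with the legacy App Engine Java SDK (the same idea exists in the other runtimes); the servlet paths and form field name are illustrative:

```java
import java.io.IOException;
import java.util.List;
import java.util.Map;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import com.google.appengine.api.blobstore.BlobKey;
import com.google.appengine.api.blobstore.BlobstoreService;
import com.google.appengine.api.blobstore.BlobstoreServiceFactory;

/** Renders a form whose action is a one-time Blobstore upload URL; Blobstore receives the file itself. */
public class UploadFormServlet extends HttpServlet {

    private final BlobstoreService blobstore = BlobstoreServiceFactory.getBlobstoreService();

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse res) throws IOException {
        // Blobstore handles the (possibly slow) upload; our code only runs after it has finished.
        String uploadUrl = blobstore.createUploadUrl("/upload-done");
        res.setContentType("text/html");
        res.getWriter().println("<form action=\"" + uploadUrl + "\" method=\"post\" enctype=\"multipart/form-data\">"
                + "<input type=\"file\" name=\"file\"><input type=\"submit\"></form>");
    }
}

/** Called by Blobstore after the upload completes; stores or serves the resulting blob key. */
class UploadDoneServlet extends HttpServlet {

    private final BlobstoreService blobstore = BlobstoreServiceFactory.getBlobstoreService();

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse res) throws IOException {
        Map<String, List<BlobKey>> uploads = blobstore.getUploads(req);
        BlobKey blobKey = uploads.get("file").get(0);
        // Persist blobKey (e.g. in the Datastore); blobstore.serve(blobKey, res) can stream it back later.
        res.sendRedirect("/view?blob-key=" + blobKey.getKeyString());
    }
}
```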