It's not possible in Salesforce to send files larger than 12 MB because of the 12 MB asynchronous heap size limit, but I don't want to use any AppExchange app. How can I achieve this? On the S3 side, the file object itself accepts a 20 MB upload without issue.
Reverse it. Pull instead of push.
Send some notification and have a program running on AWS that would pull the document by ContentVersion's ID or something. (REST API call to /services/data/v52.0/sobjects/ContentVersion/put-id-here/VersionData)
You could send the session id in the notification or keep the credentials in the program.
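A rough sketch of what that pull could look like in Node.js, assuming the notification carries the ContentVersion Id and the program already has a session id or access token (the instance URL and Ids below are placeholders):

```javascript
// Minimal sketch of the "pull" side: a small Node.js program on AWS that
// downloads the file's binary payload through the standard REST API.
// The instance URL, session id and ContentVersion Id are placeholders.
const https = require('https');
const fs = require('fs');

const INSTANCE_URL = 'https://yourInstance.my.salesforce.com';
const SESSION_ID = process.env.SF_SESSION_ID; // or an OAuth access token
const contentVersionId = '068XXXXXXXXXXXXXXX';

const url = `${INSTANCE_URL}/services/data/v52.0/sobjects/ContentVersion/${contentVersionId}/VersionData`;

https.get(url, { headers: { Authorization: `Bearer ${SESSION_ID}` } }, (res) => {
  if (res.statusCode !== 200) {
    console.error(`Download failed: HTTP ${res.statusCode}`);
    res.resume();
    return;
  }
  // The response body is the raw binary file, so it can be streamed straight
  // to disk (or into an S3 upload) without holding it all in memory.
  res.pipe(fs.createWriteStream('downloaded-file.bin'))
     .on('finish', () => console.log('File saved.'));
}).on('error', (err) => console.error(err));
```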
Currently I am making a synchronous call to MuleSoft which returns a raw image (no encoding is done) and then storing the image in a Document. Whenever we get bigger images, over 6 MB, we hit the governor limit for maximum heap size. I wanted to know whether there is a way to get a reduced or compressed image.
I have no idea if Mule has anything to preprocess images, compress...
In Apex you could try to make the operation asynchronous to benefit from the 12 MB limit. But there will be no UI element for it anymore; your component / user would have to periodically check whether the file got saved.
You could always change the direction: make Mule push to Salesforce over the standard API instead of Apex code pulling from Mule. From what I remember the standard Files API is good for files up to 2 GB.
Maybe send some notification to Mule that you want file XYZ attached to account 123; Mule would insert the ContentVersion and ContentDocumentLink (see the sketch below), and Apex would periodically check.
And when the file is no longer needed, a nightly job could delete files created by "Mr Mule" over a week ago.
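A rough sketch of the push direction, shown in Node.js for illustration (Mule would do the equivalent over the standard REST API); the instance URL, token, account Id and file are placeholder assumptions:

```javascript
// Illustrative push over the standard REST API (what Mule would effectively do).
// Requires Node 18+ for the global fetch; URL, token and account Id are placeholders.
const fs = require('fs');

const INSTANCE_URL = 'https://yourInstance.my.salesforce.com';
const ACCESS_TOKEN = process.env.SF_ACCESS_TOKEN;
const ACCOUNT_ID = '001XXXXXXXXXXXXXXX'; // record to attach the file to

async function pushFile(localPath, title) {
  // Small/medium files can go as base64 JSON; very large files (up to 2 GB)
  // should use the multipart binary upload instead to avoid the base64 overhead.
  const body = {
    Title: title,
    PathOnClient: title,
    VersionData: fs.readFileSync(localPath).toString('base64'),
    // FirstPublishLocationId links the new ContentDocument to the record,
    // so no separate ContentDocumentLink insert is needed.
    FirstPublishLocationId: ACCOUNT_ID,
  };

  const res = await fetch(`${INSTANCE_URL}/services/data/v52.0/sobjects/ContentVersion`, {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${ACCESS_TOKEN}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify(body),
  });
  console.log(await res.json()); // e.g. { id: '068...', success: true, errors: [] }
}

pushFile('./image.png', 'image.png');
```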
I am able to send files up to 4.5 MB from Salesforce using an HTTP request to an AWS S3 bucket. How can we send files as large as 50 MB from Salesforce using an HTTP request?
You need to reverse the flow. Apex has a 6 MB heap limit (12 MB if you use async Apex, 36 MB in an inbound email handler). 50 MB × 133% (base64 encoding of the binary payload) = 66.5 MB, way over the limit.
Send some notification (custom callout? platform event?) to the other system and make it log in and pull the file using the standard APIs, without invoking custom code and worrying about limits. In the message you could send them the REST API download URL ({instance.my.salesforce.com}/services/data/v50.0/sobjects/ContentVersion/068.../VersionData); if they access this via the REST API they'll get the binary payload in the response. If they'd rather have it base64-encoded, a SOAP API query of this field on ContentVersion might work better.
See "upload 20 mb file to 3rd Party services from Salesforce" and https://stackoverflow.com/a/56268939/313628
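For the receiving side, a rough sketch of "pull and land in S3" in Node.js with aws-sdk v2, assuming the notification already delivered the VersionData URL (bucket, token and URL are placeholders):

```javascript
// Sketch of the receiving system: stream the ContentVersion binary straight
// into an S3 upload so the 50 MB file never has to fit in memory at once.
const https = require('https');
const AWS = require('aws-sdk');

const s3 = new AWS.S3();
const ACCESS_TOKEN = process.env.SF_ACCESS_TOKEN;

function copyToS3(versionDataUrl, bucket, key) {
  https.get(versionDataUrl, { headers: { Authorization: `Bearer ${ACCESS_TOKEN}` } }, (res) => {
    // s3.upload() accepts a stream and performs a multipart upload under the hood.
    s3.upload({ Bucket: bucket, Key: key, Body: res }, (err, data) => {
      if (err) return console.error(err);
      console.log('Uploaded to', data.Location);
    });
  });
}

copyToS3(
  'https://yourInstance.my.salesforce.com/services/data/v50.0/sobjects/ContentVersion/068XXXXXXXXXXXXXXX/VersionData',
  'my-target-bucket',   // placeholder bucket
  'exports/largefile.pdf'
);
```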
We have a few nodejs servers where the details and payload of each request needs to be logged to SQL Server for reporting and other business analytics.
The volume of requests and the similarity of needs between servers has me wanting to approach this with a centralized logging service. My first instinct is to use something like Amazon SQS and let it act as a buffer, either in front of SQL Server directly or in front of a small logging server that would make the database calls directed by SQS.
Does this sound like a good use for SQS or am I missing a widely used tool for this task?
The solution will really depend on how much data you're working with, as each service has limitations. To name a few:
SQS
First off, since you're dealing with logs, you don't want duplication. With this in mind you'll want a FIFO (first-in, first-out) queue, which gives you exactly-once processing.
SQS by itself doesn't really invoke anything. What you'll want to do here is set up the queue, then submit messages via the AWS JS SDK. Then when you get the message back in your callback, take the message ID and data and pass them to an invoked Lambda function (you can write those in Node.js as well) which stores the info you need in your database.
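A rough sketch of the producer side with the AWS JS SDK v2 (the queue URL and message shape are assumptions; the FIFO-specific fields are what prevent duplicates):

```javascript
// Producer sketch: send one log entry per request to a FIFO queue.
const AWS = require('aws-sdk');
const sqs = new AWS.SQS({ region: 'us-east-1' });

async function logRequest(entry) {
  await sqs.sendMessage({
    QueueUrl: 'https://sqs.us-east-1.amazonaws.com/123456789012/request-logs.fifo', // placeholder
    MessageBody: JSON.stringify(entry),
    MessageGroupId: entry.server,             // keeps per-server ordering
    MessageDeduplicationId: entry.requestId,  // or enable content-based dedup on the queue
  }).promise();
}

// e.g. from an Express middleware:
// logRequest({ server: 'api-1', requestId: req.id, path: req.path, body: req.body });
```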
That said it's important to know that messages in an SQS queue have a size limit:
The minimum message size is 1 byte (1 character). The maximum is 262,144 bytes (256 KB).
To send messages larger than 256 KB, you can use the Amazon SQS Extended Client Library for Java. This library allows you to send an Amazon SQS message that contains a reference to a message payload in Amazon S3. The maximum payload size is 2 GB.
CloudWatch Logs
(Not to be confused with the high-level CloudWatch service itself, which is more about sending metrics.)
The idea here is that you submit event data to CloudWatch Logs.
It also has a limit here:
Event size: 256 KB (maximum). This limit cannot be changed
Unlike SQS, CloudWatch Logs can be automated to pass log data to Lambda, which can then write it to your SQL Server. The AWS docs explain how to set that up.
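A sketch of what such a Lambda could look like in Node.js: a CloudWatch Logs subscription delivers the records gzipped and base64-encoded, so the handler just unpacks them before writing to the database (the actual SQL insert is only indicated, since the table and driver are assumptions):

```javascript
// Lambda handler wired to a CloudWatch Logs subscription filter.
const zlib = require('zlib');

exports.handler = async (event) => {
  // The subscription delivers gzipped, base64-encoded data in event.awslogs.data.
  const payload = Buffer.from(event.awslogs.data, 'base64');
  const decoded = JSON.parse(zlib.gunzipSync(payload).toString('utf8'));

  for (const logEvent of decoded.logEvents) {
    // e.g. insert logEvent.timestamp and logEvent.message into SQL Server
    // using a driver such as the 'mssql' package.
    console.log(logEvent.timestamp, logEvent.message);
  }
};
```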
S3
Simply set up a bucket and have your servers write out data to it. The nice thing here is that since S3 is meant for storing large files, you really don't have to worry about the previously mentioned size limitations. S3 buckets also have events which can trigger Lambda functions. Then you can happily go on your way sending out log data.
If your log data gets big enough, you can scale out to something like AWS Batch, which gets you a cluster of containers that can be used to process the log data. Finally, you also get a data backup: if your DB goes down, you've got the log data stored in S3 and can throw together a script to load everything back up. You can also use Lifecycle Policies to migrate old data to lower-cost storage, or remove it altogether.
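A sketch of the S3 variant in Node.js with aws-sdk v2 (bucket name and key scheme are assumptions); an S3 ObjectCreated event on the bucket can then trigger a Lambda that loads each batch into SQL Server:

```javascript
// Servers batch their log lines and write one object per batch.
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

async function flushLogBatch(serverName, lines) {
  const key = `logs/${serverName}/${Date.now()}.jsonl`;
  await s3.putObject({
    Bucket: 'my-logging-bucket',                         // placeholder
    Key: key,
    Body: lines.map((l) => JSON.stringify(l)).join('\n'), // one JSON record per line
    ContentType: 'application/x-ndjson',
  }).promise();
}
```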
I'm writing a single-page web app (AngularJS) and a server back end (Node.js). The communication between them is done via REST.
Currently I'm trying to implement the following scenario:
Upload big files from browser to S3 public bucket.
Copy uploaded file to private bucket on S3
Transcode the uploaded file to an HTML5-compatible format (AWS Elastic Transcoder)
Store a meta-object about the file in the DB to access later
I'm racking my brains to come up with a good design for the communication / data workflow between server and client, but I keep getting stuck on the following questions:
Should the file meta-object be stored at the end or at the beginning of the process? If it is at the beginning, I have to store and handle some state information, don't I?
Who should start copying the uploaded file to the private bucket, the server or the client? If it is the server, how does the client get informed that the job succeeded?
Who starts the transcoding process? If it is the server, how does the client get informed that the job succeeded?
How would you do this?
There is a pretty good tutorial which describes the use case you are planning to implement: http://www.bitcodin.com/blog/2015/02/create-mpeg-dash-hls-content-for-amazon-s3-and-cloudfront/
If your transcoding system has a RESTful API (like bitcodin, which is used in this tutorial, or any other service), you can also drive your application client-side and use the API calls to get the state of your transcodings, etc. However, using the API you can do the same server-side as well; whichever fits you better.
I personally would store the metadata at the beginning of the process, as this is the point in time where you generate the "asset" in your database/CMS/etc.
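As a rough server-side sketch of the copy + transcode + poll flow (Node.js, aws-sdk v2); bucket names, pipeline Id and preset Id are placeholders:

```javascript
// Once the client reports the upload as finished, the server copies the object
// to the private bucket, starts an Elastic Transcoder job and stores the
// metadata record up front with a status flag the client can poll.
const AWS = require('aws-sdk');
const s3 = new AWS.S3();
const transcoder = new AWS.ElasticTranscoder();

async function handleUploadComplete(key) {
  // 1. Copy from the public upload bucket to the private bucket.
  await s3.copyObject({
    Bucket: 'my-private-bucket',
    CopySource: `my-public-bucket/${key}`,
    Key: key,
  }).promise();

  // 2. Kick off transcoding.
  const job = await transcoder.createJob({
    PipelineId: '1111111111111-abcde1',   // placeholder pipeline Id
    Input: { Key: key },
    Outputs: [{ Key: `transcoded/${key}.mp4`, PresetId: '1351620000001-000010' }], // pick the preset you need
  }).promise();

  // 3. Store the meta-object right away with a status the client can poll,
  //    e.g. GET /videos/:id/status returns { status: 'TRANSCODING' } until
  //    transcoder.readJob({ Id: job.Job.Id }) (or an SNS notification) reports completion.
  return { jobId: job.Job.Id, status: 'TRANSCODING' };
}
```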
Will the response timer on Google App Engine start upon submitting the web page's form?
If I'm going to upload a file larger than 1 MB, I could split it into 1 MB chunks to fit within the Google App Engine Datastore limit. My concern is that if the client's internet connection is slow, it would eat up the 30-second timer, right? If that's the case, is it impossible to upload large files over a slow connection?
The 30 second response time limit only applies to code execution. So the uploading of the actual file as part of the request body is excluded from that. The timer will only start once the request is fully sent to the server by the client, and your code starts handling the submitted request. Hence it doesn't matter how slow your client's connection is.
As a side note, instead of splitting your file into multiple parts, try using the Blobstore. I am using it for images, and it raises the storage limit to 50 MB. (Remember to enable billing to get access to the Blobstore.)