I am trying to upload image/video files into an S3 bucket from my React JS application, so I referred to some of the React S3 uploader npm packages: react-dropzone-s3-uploader and react-s3-uploader-multipart. But both keep giving errors when imported into a React JS component, and I have already posted the error message in another Stack Overflow question of mine (please refer to that question). I would like to do this multipart upload directly from my React application to the S3 bucket. If anyone knows the solution, please share it with me.
Thanks in advance.
The only library that worked perfectly for me and supported AWS S3 multipart uploads with minimal work was Uppy. Highly recommended to try out:
https://uppy.io/docs/aws-s3-multipart/
You will need to provide a couple of endpoints for it, though, so read the docs. You will see "Companion" mentioned there; you can safely ignore it, provide the five endpoints your custom API needs, and it will all work. I would suggest running the UI part, putting in some dummy URLs for these five functions, and checking the browser's network activity to understand more quickly how it works. The five functions are:
A function that calls the S3 Multipart API to create a new upload
A function that calls the S3 Multipart API to list the parts of a file that have already been uploaded
A function that generates a batch of signed URLs for the specified part numbers
A function that calls the S3 Multipart API to abort a Multipart upload, and removes all parts that have been uploaded so far
A function that calls the S3 Multipart API to complete a Multipart upload, combining all parts into a single object in the S3 bucket
Yet no matter how you build the multipart upload, you will always need to start the upload, list the parts, get signed URLs to upload each part, and abort or complete the upload. So building this will never be a three-minute task, but Uppy gave me the most success. A rough backend sketch of these five endpoints follows below.
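To make that concrete, here is a minimal sketch of the five endpoints using Flask and boto3 (my assumption; any backend works). The route paths, bucket name, and request/response shapes are placeholders to adapt to your Uppy configuration:

import boto3
from flask import Flask, jsonify, request

app = Flask(__name__)
s3 = boto3.client("s3")
BUCKET = "my-bucket"  # placeholder

@app.route("/s3/multipart", methods=["POST"])
def create_upload():
    # 1. Start a new multipart upload and return its UploadId.
    body = request.get_json()
    res = s3.create_multipart_upload(
        Bucket=BUCKET, Key=body["filename"],
        ContentType=body.get("type", "application/octet-stream"))
    return jsonify(key=res["Key"], uploadId=res["UploadId"])

@app.route("/s3/multipart/<upload_id>", methods=["GET"])
def list_parts(upload_id):
    # 2. List the parts already uploaded for this upload.
    res = s3.list_parts(Bucket=BUCKET, Key=request.args["key"],
                        UploadId=upload_id)
    return jsonify(res.get("Parts", []))

@app.route("/s3/multipart/<upload_id>/batch", methods=["GET"])
def sign_parts(upload_id):
    # 3. Generate a presigned URL for each requested part number.
    key = request.args["key"]
    numbers = [int(n) for n in request.args["partNumbers"].split(",")]
    urls = {n: s3.generate_presigned_url(
                "upload_part",
                Params={"Bucket": BUCKET, "Key": key,
                        "UploadId": upload_id, "PartNumber": n})
            for n in numbers}
    return jsonify(presignedUrls=urls)

@app.route("/s3/multipart/<upload_id>", methods=["DELETE"])
def abort_upload(upload_id):
    # 4. Abort the upload; S3 discards the parts uploaded so far.
    s3.abort_multipart_upload(Bucket=BUCKET, Key=request.args["key"],
                              UploadId=upload_id)
    return jsonify(ok=True)

@app.route("/s3/multipart/<upload_id>/complete", methods=["POST"])
def complete_upload(upload_id):
    # 5. Combine the uploaded parts into a single S3 object.
    body = request.get_json()
    res = s3.complete_multipart_upload(
        Bucket=BUCKET, Key=body["key"], UploadId=upload_id,
        MultipartUpload={"Parts": body["parts"]})
    return jsonify(location=res["Location"])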
You can use React Dropzone Uploader, which gives you file previews (including image thumbnails) out of the box, and also handles uploads for you.
Uploads have progress indicators, and they can be cancelled or restarted. The UI is fully customizable.
Here's an example of how to upload files directly to an S3 bucket, using pre-signed URLs.
Full disclosure: I wrote this library.
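For the S3 part specifically, the backend piece is small. Here is a minimal boto3 sketch of generating a pre-signed POST the browser can upload to directly; the bucket name and expiry are placeholders:

import boto3

s3 = boto3.client("s3")

def get_upload_params(filename):
    # Returns {"url": ..., "fields": {...}}; the browser POSTs the file
    # to "url" with "fields" included as form data.
    return s3.generate_presigned_post(
        Bucket="my-bucket",  # placeholder
        Key=filename,
        ExpiresIn=300,       # URL valid for 5 minutes
    )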
Here’s a way to do it full-stack MERN with Express file upload. The server code here is minimal. This might be helpful; if not, no worries!
https://link.medium.com/U1SdsoHMy2
I need to create a Spring Boot WebFlux REST web service to act as a proxy between an AngularJS app that shows a video stream and an endpoint at dacast.com that delivers m3u8 playlist-based content.
At this time, there is a video component in the Angular app that takes the following URI and presents the content to the user. I plan to create a reactive WebFlux REST service but am at a loss as to how to implement this proxy. There are a lot of posts online about viewing the HLS feed in HTML, but nothing about how to proxy between the stream and a consumer of it.
https://dcunilive11-lh.akamaihd.net/i/dlive_1#xxxxxx/master.m3u8
I believe that I need to download the master.m3u8 file, which will contain HTTPS endpoints that I can fetch as a Flux stream and pass along to the Angular app. Does this make sense? I'd appreciate your help and tips...
Thanks,
Mike
The m3u8 file is a text file which contains some info about the video and links to the media streams, as you say.
The simplest way for the Angular app to play the video would be to provide it the link to the original m3u8 file directly, but I am guessing it can't reach that link for some reason in your use case.
Assuming this is correct, it sounds like your web service just needs to act as a proxy for the m3u8 file link and the media streams.
There are some instructions in the online Spring documentation for this - e.g.: https://cloud.spring.io/spring-cloud-gateway/1.0.x/multi/multi__building_a_gateway_using_spring_mvc.html
One thing that may be causing some confusion is that the HLS media streams are actually transferred between the client and the server as a series of client requests and server responses, i.e. similar to regular HTTP request/responses. They are not constantly streaming, i.e. not something you would use a WebSocket to read.
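To illustrate the idea (framework aside), here is a rough sketch of such a proxy in Python/Flask; both cases translate directly to WebFlux handler functions. The UPSTREAM base URL is a placeholder:

import requests
from flask import Flask, Response, request

app = Flask(__name__)
UPSTREAM = "https://dcunilive11-lh.akamaihd.net/i/dlive_1"  # placeholder

@app.route("/proxy/<path:name>")
def proxy(name):
    upstream = requests.get("%s/%s" % (UPSTREAM, name), stream=True)
    if name.endswith(".m3u8"):
        # Playlists are small text files: rewrite any absolute segment
        # URIs so the player requests them through this proxy too.
        body = upstream.text.replace(UPSTREAM, request.host_url + "proxy")
        return Response(body, content_type="application/vnd.apple.mpegurl")
    # Media segments are plain request/response pairs, not a live socket:
    # stream the bytes through without buffering the whole segment.
    return Response(upstream.iter_content(chunk_size=65536),
                    content_type=upstream.headers.get("Content-Type"))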
I have a zipped file containing images which I am sending as the response to a Python REST API call. I want to create a REST application that consumes the Python REST API in this manner: the response's content should be extracted without downloading it (on the browser side), and all the images should be displayed to the user. Is this possible? If yes, could you please help me with the implementation? I am unable to find help anywhere.
I think what you are trying to do is have a backend server (Python) where zip files of images are hosted. You need to create an application (which could be in React) that:
Sends HTTP calls to the server to get those .zip files.
Unzips them (see: How to unzip file on javascript).
Displays the images to the user (see: https://medium.com/better-programming/how-to-display-images-in-react-dfe22a66d5e7).
I'm not sure what UTF-8 has to do with this, but this is possible. A quick Google search gave me the results above.
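If you also control the Python backend, an alternative to unzipping in the browser is to extract the archive server-side and return the images as base64 data URLs that the frontend can put straight into <img> tags. A rough sketch, with the route and archive path as placeholders:

import base64
import zipfile
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/images")
def images():
    # Extract the archive server-side and return each image as a data
    # URL the frontend can use directly as an <img> src.
    out = []
    with zipfile.ZipFile("images.zip") as zf:  # placeholder path
        for name in zf.namelist():
            ext = name.rsplit(".", 1)[-1].lower()
            if ext in ("png", "jpg", "jpeg", "gif"):
                mime = "jpeg" if ext == "jpg" else ext
                data = base64.b64encode(zf.read(name)).decode("ascii")
                out.append({"name": name,
                            "src": "data:image/%s;base64,%s" % (mime, data)})
    return jsonify(out)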
I need to display/stream large video files in React JS. These files are uploaded by the user to a private S3 bucket using a React form and Flask.
I tried the getObject method, but my file size is too large. The signed URL method required me to download the file.
I am new to the AWS-Python-React setup. What is the best/most efficient/least costly approach to display large video files in React?
AWS offers other streaming-specific services, but if you really want to serve them off S3, you could retrieve the files using BitTorrent, which, with the right client/video player, would allow you to start playing them without having to download the whole file.
Since you mentioned you're using Python, you could do this using the AWS SDK like so:
import boto3

# Request a .torrent file for the object instead of the object itself.
s3 = boto3.client('s3')
response = s3.get_object_torrent(
    Bucket='my_bucket',
    Key='some_prefix/my_video.mp4'
)
The response object will have this format:
{
'Body': StreamingBody()
}
Full docs here.
Then you could use something like webtorrent to stream it on the frontend.
Two things to note about this approach (quoting docs):
Amazon S3 does not support the BitTorrent protocol in AWS Regions launched after May 30, 2016.
You can only get a torrent file for objects that are less than 5 GBs in size.
Is there any way that an image uploaded by the user can be sent directly to a Cloud Function without converting it to a string in Python?
Thank you in advance
Instead of sending the image to the Cloud Function, it's better if the image is first uploaded to GCS. This way, you can just provide the gsUri, and the Cloud Function can handle the image file by using the Cloud Storage client library. For example, the OCR Tutorial follows this approach.
In this case, if a function fails, the image is preserved in Cloud Storage. By using a background function triggered when the image is uploaded, you can follow the strategy for retrying background functions to get "at-least-once" delivery.
If you want to use HTTP Functions instead, then the only recommended way to provide the image is to convert it to a string and send it inline with the POST. Note that for production environments it is far better to either have the file within the same platform or send it inline.
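As a rough sketch of the GCS-triggered approach (function and variable names here are illustrative, not taken from the tutorial):

from google.cloud import storage

def on_image_uploaded(event, context):
    # A GCS-triggered background function: 'event' carries the bucket
    # and object name of the uploaded file, so no string round-trip.
    client = storage.Client()
    blob = client.bucket(event["bucket"]).blob(event["name"])
    image_bytes = blob.download_as_bytes()  # raw image bytes from GCS
    # Process image_bytes here; if this raises, the retry strategy
    # re-delivers the event for at-least-once handling.
    print("Processing %s (%d bytes)" % (event["name"], len(image_bytes)))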
Yes, I've seen this question already, but I'm finding information in the GAE docs that contradicts its accepted answer and Nick Johnson's blog.
The docs talk about uploading more than one file at the same time - the function to get uploaded files returns a list:
The get_uploads() method returns a list of BlobInfo objects, one for each uploaded file in the request.
But everywhere I've looked, the going assumption is that only one file at a time can be uploaded, and a new upload URL needs to be created each time.
Is it even possible to upload more than one file at the same time using HTML5/Flash with Plupload?
Currently, the blobstore service's upload URLs only support one file upload per POST. In order to upload multiple files, you need to use the pattern documented in my blog posts. In the future, we may extend the blobstore API to support more flexible upload URLs, supporting multiple uploaded files in a single request.
Edit: The blobstore now supports multiple file uploads in a single request.
Here's how I use the get_uploads() method for more than one file:
blob_info = self.get_uploads()[0]   # BlobInfo for the first uploaded file
blob_info2 = self.get_uploads()[1]  # BlobInfo for the second uploaded file
Nick Johnson's dropbox service is another example, and I hope you find what suits your needs.
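For completeness, here is a sketch of a webapp2 handler that loops over every file in a single multi-file POST rather than indexing the parts individually; class and output details are placeholders:

from google.appengine.ext.webapp import blobstore_handlers

class MultiUploadHandler(blobstore_handlers.BlobstoreUploadHandler):
    def post(self):
        # get_uploads() returns one BlobInfo per file in the request,
        # so a single multi-file POST can be handled in a loop.
        for blob_info in self.get_uploads():
            self.response.out.write('Stored %s as %s\n' %
                                    (blob_info.filename, blob_info.key()))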