How can I access the generated thumbnail in S3 in frontend? - reactjs

I created a Lambda trigger: when a video file is uploaded to an S3 input bucket, it generates a thumbnail in an output bucket, but I don't know how to access the thumbnail from my frontend. Please help me.
I am generating 3 thumbnails from a single video; in the image below 👇 there are 4 videos.
The file names are dfdf, treasersonthing, vijay.treaser.ve, and vollyball, and I want all 3 images for each of these file names.

The question is quite open - do you want to access the generated thumbnails on the frontend of your website? If so, I will try and provide some ideas for the architecture, based on some assumptions.
Publicly Accessible Thumbnails
Assuming you want to make the thumbnails publicly accessible, S3 can expose them through its own HTTP endpoint. See AWS documentation. Please note that this involves enabling Public Access on your bucket, which can potentially be risky. See How can I secure files in my Amazon S3 bucket for more information. While this is an option, I'm not going to elaborate on it, as it's probably not the best one.
The preferred way is to have a CloudFront distribution serve files from your S3 bucket. This has the advantages of a typical CDN - you can have edge locations caching your files across the globe, thus reducing the latencies your customers see. See this official guide on how to proceed with this CloudFront + S3 solution.
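As a rough illustration, assuming the Lambda writes the three thumbnails to the output bucket under a predictable naming scheme (say <video-name>-1.jpg to <video-name>-3.jpg; adjust this to whatever your function actually produces), the React frontend only needs to build the matching CloudFront URLs:

```typescript
// Hypothetical helper: builds the thumbnail URLs for one video, assuming the
// Lambda writes them as `<videoName>-1.jpg` ... `<videoName>-3.jpg` to the
// output bucket served by a CloudFront distribution.
const CDN_DOMAIN = "https://dxxxxxxxxxxxx.cloudfront.net"; // your distribution domain

function thumbnailUrls(videoName: string, count = 3): string[] {
  return Array.from(
    { length: count },
    (_, i) => `${CDN_DOMAIN}/${encodeURIComponent(videoName)}-${i + 1}.jpg`
  );
}

// Usage in a React component:
// thumbnailUrls("vollyball").map((src) => <img key={src} src={src} alt="thumbnail" />)
```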
Restricted-access Thumbnails
If your thumbnails are not meant to be public, then you can consider two options:
implement your own service (hosted on any compute engine you prefer) to handle the authentication & authorization, then return the files to your customers, or
use the CloudFront + S3 solution and control the authentication and authorization with Lambda@Edge (a minimal sketch follows below). See AWS docs.
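For the Lambda@Edge route, here is a minimal sketch of a viewer-request function; the authorization header and the validateToken check are placeholders for whatever signed-cookie or JWT verification your auth system actually uses:

```typescript
import type { CloudFrontRequestEvent, CloudFrontRequestResult } from "aws-lambda";

// Minimal sketch of a Lambda@Edge viewer-request handler that rejects requests
// without a valid token. validateToken is a placeholder, not a real check.
export const handler = async (
  event: CloudFrontRequestEvent
): Promise<CloudFrontRequestResult> => {
  const request = event.Records[0].cf.request;
  const token = request.headers["authorization"]?.[0]?.value ?? "";

  if (!validateToken(token)) {
    return {
      status: "403",
      statusDescription: "Forbidden",
      body: "Not authorized to view this file",
    };
  }

  // Token is valid: let CloudFront fetch the object from the S3 origin.
  return request;
};

function validateToken(token: string): boolean {
  // Placeholder: verify a JWT or signed cookie here.
  return token.length > 0;
}
```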

Related

How can I upload files to my S3 bucket from my React app without a backend?

I am creating a web app using React where the user can upload files to a folder in my S3 bucket. This folder will have a unique passcode name. The user (or someone else) can use this passcode to retrieve these files from the S3 folder. So basically there is no login/authentication system.
My current issue is how do I safely allow read/write access to my S3 bucket? Almost every tutorial stores the access keys to the client code which I read is very bad practice but I also don't want to create a backend for something this simple. Someone suggested presigned URLs but I have no idea how to set that up (do I use Lambda? IAMs?). I'm really new to AWS (and webdev in general). Does anyone have any pointers on what I could look into?
do I use Lambda? IAMs?
The setup and process is fully explained in this AWS blog post:
Uploading to Amazon S3 directly from a web or mobile application
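The pattern described in that post is, roughly: a small Lambda behind API Gateway hands out a short-lived presigned URL, and the browser uploads the file straight to S3 with it. A sketch under those assumptions (the bucket name, key scheme, and API route are placeholders, not from the blog post):

```typescript
// Lambda handler (Node.js): returns a short-lived presigned PUT URL.
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const s3 = new S3Client({});

export const handler = async () => {
  const key = `uploads/${Date.now()}-${Math.random().toString(36).slice(2)}`;
  const url = await getSignedUrl(
    s3,
    new PutObjectCommand({ Bucket: "my-upload-bucket", Key: key }),
    { expiresIn: 300 } // URL is valid for 5 minutes
  );
  return { statusCode: 200, body: JSON.stringify({ url, key }) };
};

// In the React app: ask API Gateway for the URL, then PUT the file to it.
// const { url } = await (await fetch("/get-upload-url")).json();
// await fetch(url, { method: "PUT", body: file });
```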

Securing S3 bucket for users?

I have developed a react native app, with AWS Amplify to support the backend (DynamoDB, S3). All users of the app have to use Auth.signIn() to sign in and are part of a user pool.
Once in, they can start to upload videos to S3 via the app or view videos in the app that are in the S3 bucket that is PUBLIC.
I use the path to the S3 video (https://myS3bucket....) as the source URL of the video. However, the videos are only visible in my app when the bucket is public. With any other setting (protected/private) no video is visible. How can I make this more secure?
S3 Buckets have 3 methods of managing security:
IAM: Any user or role within the same AWS account as the bucket can be granted permissions to interact with the S3 bucket and its objects.
S3 Bucket Policies: Grant bucket-wide (or per-prefix) access to S3 buckets (an example policy is sketched below).
S3 ACLs: Per-object permissions.
It's generally advised not to use S3 ACLs these days, as bucket policies cover the same functionality. Only use ACLs if you need a specific object to have a different set of permissions.
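For illustration only, a bucket policy granting read access on a single prefix could be applied like this; the bucket name, prefix, and role ARN are made-up placeholders:

```typescript
import { S3Client, PutBucketPolicyCommand } from "@aws-sdk/client-s3";

// Example only: grants s3:GetObject on the "thumbnails/" prefix to one IAM role.
const policy = {
  Version: "2012-10-17",
  Statement: [
    {
      Effect: "Allow",
      Principal: { AWS: "arn:aws:iam::123456789012:role/app-backend" },
      Action: "s3:GetObject",
      Resource: "arn:aws:s3:::my-media-bucket/thumbnails/*",
    },
  ],
};

const s3 = new S3Client({});

export async function applyBucketPolicy(): Promise<void> {
  await s3.send(
    new PutBucketPolicyCommand({
      Bucket: "my-media-bucket",
      Policy: JSON.stringify(policy),
    })
  );
}
```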
I suggest not making files or the bucket public if you want authenticated users to upload and/or download files. Instead, use S3 signed URLs (sketched below) to give users access to files. In other words, the backend authenticates the user, generates a signed URL, and the React Native app then treats that URL like any other file URL, e.g. as a video source.
You will need to change a few things, but this guide should cover that.
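As a sketch of what the signed-URL part of the backend could look like (the bucket name and expiry below are assumptions, not from the question), the app then uses the returned URL as the video source:

```typescript
import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const s3 = new S3Client({});

// Returns a temporary URL for a private video object. The React Native app can
// use it directly, e.g. <Video source={{ uri: url }} /> with react-native-video.
export async function getVideoUrl(key: string): Promise<string> {
  return getSignedUrl(
    s3,
    new GetObjectCommand({ Bucket: "my-private-videos", Key: key }),
    { expiresIn: 900 } // URL expires after 15 minutes
  );
}
```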
I have recently published an article which describes in detail the security best practices that help address the following points:
How to secure S3 buckets that store sensitive user data and the application code.
How to securely configure a CloudFront distribution.
How to protect frontend apps against common OWASP threats with CloudFront Functions.
To learn more, have a look at the article.
Best,
Stefan

AngularJS and Ruby on Rails - Uploading multiple files directly to Amazon S3

I'm writing an app where users can write Notes, and each note can have many files attached to it.
I would like users to be able to click 'Browse', select multiple files, which will be uploaded when the user clicks 'Save Note'.
I want these files to be uploaded directly into Amazon S3 (or some other cloud storage solution?) without going through my server, so I don't have to worry about uploads blocking my server.
What is the best way to accomplish this?
I have seen many examples to upload directly into Amazon S3, but none of them support multiple files. Will I somehow have to do this all in Javascript by looping through a collection of files selected with the Browse button?
Thanks!
Technically, the JavaScript running in the browser could make RESTful HTTP calls to AWS and store data in S3, but then you would be exposing the AWS security credentials in the script, which is not good.
I guess the only way is to process it through a web server which can securely access AWS and store the notes. Or you could just write those notes to the local disk (where the web server sits) and schedule a tool like s3cmd to automatically sync them to an S3 bucket.

Application hosted on AppHarbor, how can I save uploaded images?

AppHarbor (like Heroku) doesn't allow you to save uploaded files or images. I need to offload this somewhere and I have no idea what services exist for this purpose.
I've looked into FilePicker.io but they display a tacky branding image in their uploader and to remove that branding you have to pay a large sum of money.
Any suggestions on how to approach this problem? What is the modus operandi with applications that need file uploads that are hosted on PaaS?
We recommend that you use Amazon S3 to store files like this. An AppHarbor user has written a guide on how to get started with S3 on the support forums.

Where to put shared files on amazon AWS design for failure architecture?

I have a simple LAMP website on AWS. I want at least 2 instances, that will connect to RDS, and I'll have a load balancer.
The simple question, not answered on the web is: Where do you put the shared user uploaded files like the images?
NFS could be one answer: create another instance that shares a folder over NFS and have the other instances mount it. But that is a single point of failure.
Rsync folders between instances?!
The best answer I found is to use s3fs and mount an S3 bucket on all my instances. But I don't like using things not supported by Amazon.
What is the right solution?
Thanks!
SOLUTION
I tried s3fs but experienced many issues (e.g. with eventual consistency).
So in the end I used the AWS SDK's S3 library to upload directly to S3,
and then pointed a domain at the S3 bucket directly.
Thanks to all
Why not put the files in S3? You don't need to use something like s3fs - just use the regular S3 APIs to upload the files to S3.
Users can also upload the files straight to S3 without going via your server if you use the POST API, as sketched below.
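A rough sketch of that POST approach (the bucket name, key, and size limit are placeholders): the server creates a presigned POST, and the browser submits the file as multipart form data directly to the bucket URL.

```typescript
// Server side: create a presigned POST for one object key.
import { S3Client } from "@aws-sdk/client-s3";
import { createPresignedPost } from "@aws-sdk/s3-presigned-post";

const s3 = new S3Client({});

export async function createUploadForm(key: string) {
  return createPresignedPost(s3, {
    Bucket: "my-uploads-bucket",
    Key: key,
    Expires: 300, // seconds until the form expires
    Conditions: [["content-length-range", 0, 10 * 1024 * 1024]], // max 10 MB
  });
}

// Browser side: POST the selected file directly to the bucket URL.
// const { url, fields } = await createUploadForm(file.name); // via your API
// const form = new FormData();
// Object.entries(fields).forEach(([k, v]) => form.append(k, v as string));
// form.append("file", file); // the file must be the last field
// await fetch(url, { method: "POST", body: form });
```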
Like Frederick said, S3 is probably best, because the only other option is to put the files on the load balancer and return them right away. But then if you run multiple load balancers with some kind of round-robin DNS, you are up the creek and have to copy the files between them, and it just won't work. And if you are using an ELB, you can't do that anyway.
As you say, you are already looking at s3fs, and you are right - why add an extra layer of software?
It is likely better to generate time-limited signed URLs - or, if the files are truly public, just make that part of the bucket public and serve the files directly from an S3 URL. You can also point DNS at S3, so http://my-bucket.s3.amazonaws.com/path/to/file.png becomes files.mycompany.com/path/to/file.png.
