I have developed a ReactJS front end, FastAPI backend and MongoDB database. I want to upload images to a private Azure storage account and consume them on the site.
What I'm struggling to understand is the best practice for loading the images.
Should I:
1. Get a SAS key from my API, then go directly from ReactJS to the storage account URL?
2. Get the image from Azure Storage via the API and serve it to the ReactJS app?
3. A better option?
Thanks in advance
The preferred way would be option 1: get a SAS key from your API, then go directly from ReactJS to the storage account URL.
The advantage of this approach is that the browser requests the images directly from Azure Storage, so there is less load on your API to fetch and serve them. You may need to configure CORS settings on your Storage account if your React app makes an Ajax request to fetch the blob contents from Storage.
Doing it this way, however, exposes your SAS URL to the client. If you do not want clients to know where the images are served from, go with option 2.
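For context, here is a minimal sketch of what the SAS-issuing endpoint could look like on the FastAPI side, assuming the azure-storage-blob SDK and placeholder account/container names; the React app then uses the returned URL directly in an img tag or fetch call (if it fetches via Ajax, the CORS note above applies):

```python
from datetime import datetime, timedelta, timezone

from azure.storage.blob import BlobSasPermissions, generate_blob_sas
from fastapi import FastAPI

app = FastAPI()

# Placeholder values -- substitute your own storage account details.
ACCOUNT_NAME = "mystorageaccount"
ACCOUNT_KEY = "<storage-account-key>"
CONTAINER_NAME = "images"


@app.get("/images/{blob_name}/url")
def get_image_url(blob_name: str) -> dict:
    """Return a short-lived, read-only SAS URL for a single blob."""
    sas_token = generate_blob_sas(
        account_name=ACCOUNT_NAME,
        container_name=CONTAINER_NAME,
        blob_name=blob_name,
        account_key=ACCOUNT_KEY,
        permission=BlobSasPermissions(read=True),
        expiry=datetime.now(timezone.utc) + timedelta(minutes=15),
    )
    return {
        "url": f"https://{ACCOUNT_NAME}.blob.core.windows.net/"
               f"{CONTAINER_NAME}/{blob_name}?{sas_token}"
    }
```

Keeping the expiry short limits how long a leaked URL stays useful, which mitigates the exposure mentioned above.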
Related
I would like to upload videos directly from a ReactJS component to Azure Media Services without a middle server.
I investigated and found this article and the @azure/arm-mediaservices SDK. It seems that secret tokens are involved, and I assume it's not a good idea to use them on the client side.
Can someone share thoughts and examples on how to simply upload a video directly from the client side (a ReactJS component) to Azure Media Services with a temporary token?
I found a way to upload images to Blob Storage the way I want to, but I didn't find a way to do the same for videos to Media Services (without middle-server-side operations).
You are correct: we never recommend storing your account secrets on the client side. You will still want a mid-tier for that. One simple way to do that securely is with Azure Functions.
All you need to do is create the Asset and then get a SAS URL to its container to upload content into it. Once you have those, there should be samples out there for uploading to a SAS URL in Azure Storage.
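As a rough sketch of that flow (shown here with the Python Media Services management SDK, since the thread doesn't fix a language for the Functions mid-tier; subscription, resource group and account names are placeholders): create the Asset, ask Media Services for a read/write container SAS URL, and hand only that URL back to the client.

```python
from datetime import datetime, timedelta, timezone

from azure.identity import DefaultAzureCredential
from azure.mgmt.media import AzureMediaServices
from azure.mgmt.media.models import (
    Asset,
    AssetContainerPermission,
    ListContainerSasInput,
)

# Placeholder identifiers for illustration.
SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "my-resource-group"
ACCOUNT_NAME = "mymediaaccount"

client = AzureMediaServices(DefaultAzureCredential(), SUBSCRIPTION_ID)


def create_upload_url(asset_name: str) -> str:
    """Create an Asset and return a SAS URL for its storage container.

    The browser can PUT the video blob straight into this container;
    the account secrets never leave the Function.
    """
    client.assets.create_or_update(RESOURCE_GROUP, ACCOUNT_NAME, asset_name, Asset())
    sas = client.assets.list_container_sas(
        RESOURCE_GROUP,
        ACCOUNT_NAME,
        asset_name,
        ListContainerSasInput(
            permissions=AssetContainerPermission.READ_WRITE,
            expiry_time=datetime.now(timezone.utc) + timedelta(hours=1),
        ),
    )
    return sas.asset_container_sas_urls[0]
```

The ReactJS component then uploads the file to that SAS URL exactly as it would for any other Blob Storage upload.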
I am creating a web app where authenticated users have the option to upload pictures. I am confused as to whether it's better to do this on the front end or the back end. I have already implemented it on the front end, but I had to include my "accessKeyId" and "secretKey", and I don't know if this compromises my security. I am using cloud functions for my back end. If anyone can help me with best practices in relation to this, I will be very grateful.
You can generate presigned URLs from your backend; then your frontend can safely upload files directly into S3 without exposing your credentials.
Take a look at the documentation here.
Also, this article points out some of the advantages of that strategy:
You can still allow only authenticated users to obtain presigned URLs
You can set an expiration time for the generated presigned URLs
You save bandwidth, memory, processing and upload time because your files never pass through your backend function
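A minimal sketch of the backend side, assuming boto3 and a hypothetical bucket name (the linked docs cover the details): the authenticated request gets back a presigned POST, and the browser submits the file to S3 directly.

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "my-user-uploads"  # hypothetical bucket name


def create_presigned_upload(key: str) -> dict:
    """Return a presigned POST the browser can use to upload one object.

    The condition caps the object size and the URL expires after 5 minutes,
    so the backend's credentials are never exposed to the client.
    """
    return s3.generate_presigned_post(
        Bucket=BUCKET,
        Key=key,
        Conditions=[["content-length-range", 0, 5 * 1024 * 1024]],  # max 5 MB
        ExpiresIn=300,
    )

# The returned dict contains a "url" and a "fields" map; the frontend posts
# the fields plus the file as multipart/form-data to that URL.
```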
I have an app using the MERN stack. What I want to do is allow the user to upload images from their local machine, save each image, and display it on the website. What would be the recommended way to do this? GridFS with MongoDB seems to be an option, but from what I am reading, storing image files directly in a database isn't recommended.
Storing images directly in the DB is not recommended. Instead, you can use third-party libraries like multer that store images in a folder and generate a path for you to access each image, which you can then save in the DB.
Another approach is to use a third-party service like Cloudinary or an S3 bucket to store the images; they return URLs, and you save those URLs in the DB.
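As a hedged illustration of the second approach (the answer doesn't fix a backend language, so this sketch uses boto3 and a hypothetical bucket; the same idea applies with the Node SDK or Cloudinary): upload the file to object storage and save only the returned URL in the DB.

```python
import uuid

import boto3

s3 = boto3.client("s3")
BUCKET = "my-app-images"  # hypothetical bucket name


def store_image(fileobj, content_type: str) -> str:
    """Upload an image and return the URL to persist in the database."""
    key = f"images/{uuid.uuid4()}"
    s3.upload_fileobj(fileobj, BUCKET, key, ExtraArgs={"ContentType": content_type})
    # Save this URL (or just the key) in your images collection
    # instead of the file bytes themselves.
    return f"https://{BUCKET}.s3.amazonaws.com/{key}"
```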
I have a REST API supporting a multi-user React app that has been running on an EC2 instance for a while, and I am now adding uploads of photos/PDFs that will be attached to specific users. I was able to accomplish this in testing using EFS provisioned on the same EC2 instance as the API, but I am looking to move to S3.
When testing with EFS, I sent everything through the REST API: the user would do a POST, the API would store the file in EFS and record metadata in my DB about where the file was stored; then, to retrieve the data, the user would do a GET to the REST API and the server would fetch the file from EFS based on the metadata in the DB.
I am wondering what the usual use case for S3 is. Do I still have to send everything through my REST API to be sure users only have access to the PDFs/images they are supposed to, or is there a way to verify their identity and request the resources from S3 directly on the front end, with my API only returning a list of S3 URLs for the files?
The particular use case I have in mind is letting users upload profile pictures, so that when a user searches for another user by name, the profile pictures of all the users returned by the query can be displayed in the list of results.
As far as I know, there is no "normal" way to deal with this particular situation - either could make sense depending on your needs.
Here are some options:
Option 1
It's possible to safely allow users to access resources directly from S3 by using AWS STS to generate temporary credentials that your users can use to call the S3 APIs (see the sketch after these options).
Option 2
If you're happy for the pics to be public, you could configure a bucket as a static website and simply use those public URLs in your web application.
Option 3
Use CloudFront to serve private content from your S3 buckets.
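A rough sketch of Option 1, assuming boto3, a hypothetical read-only role, and per-user key prefixes (all names here are illustrative): your API verifies the user, then returns short-lived credentials whose inline session policy only allows reading that user's objects.

```python
import json

import boto3

# Hypothetical role and bucket names for illustration.
UPLOADS_BUCKET = "my-app-user-files"
READ_ROLE_ARN = "arn:aws:iam::123456789012:role/S3UserFilesReadOnly"

sts = boto3.client("sts")


def issue_temporary_credentials(user_id: str) -> dict:
    """Return short-lived credentials scoped to a single user's prefix.

    The inline session policy further restricts whatever the role allows,
    so the browser can only read objects under users/<user_id>/.
    """
    session_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": f"arn:aws:s3:::{UPLOADS_BUCKET}/users/{user_id}/*",
        }],
    }
    resp = sts.assume_role(
        RoleArn=READ_ROLE_ARN,
        RoleSessionName=f"user-{user_id}",
        Policy=json.dumps(session_policy),
        DurationSeconds=900,  # 15 minutes
    )
    creds = resp["Credentials"]
    return {
        "accessKeyId": creds["AccessKeyId"],
        "secretAccessKey": creds["SecretAccessKey"],
        "sessionToken": creds["SessionToken"],
        "expiration": creds["Expiration"].isoformat(),
    }
```

For the profile-picture listing specifically, a simpler variant of the same idea is to have the API return a presigned GET URL per picture alongside the search results.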
I have a form where a user submits an image from the client side. Currently I am using AWS Amplify to upload images to S3. I have not faced an issue so far because users mostly selected images smaller than 5 MB. Now I need to raise this limit and also optimize performance, since the size limit is increasing. While researching performance I came across Transfer Acceleration and its compatibility with the AWS SDK, but not with AWS Amplify, so I had to move my upload logic to the SDK. My backend architecture is serverless, using Lambdas. Say the user submits the form and the request contains a 10 MB image: Lambda will not process this request, because the Lambda request/response payload is limited to 6 MB. How can I improve performance while also increasing the size limit? And can this be achieved with AWS Amplify itself?
Your use case is best served by S3 presigned URLs. Your Lambda function generates a URL for an S3 upload using the aws-sdk and gives back only that URL to your frontend. The frontend then uploads directly to this URL without invoking AWS Lambda again, so all the data transfer happens between your frontend and S3.
You can find more details about this solution here.
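A minimal Lambda sketch of that flow, assuming boto3 behind API Gateway and a hypothetical bucket name: the function only returns the presigned URL, so the 10 MB file never passes through Lambda's 6 MB payload limit.

```python
import json
import uuid

import boto3

s3 = boto3.client("s3")
BUCKET = "my-upload-bucket"  # hypothetical bucket name


def handler(event, context):
    """Return a presigned PUT URL; the browser uploads the file to it directly."""
    body = json.loads(event.get("body") or "{}")
    key = f"uploads/{uuid.uuid4()}-{body.get('fileName', 'file')}"
    url = s3.generate_presigned_url(
        "put_object",
        Params={
            "Bucket": BUCKET,
            "Key": key,
            "ContentType": body.get("contentType", "application/octet-stream"),
        },
        ExpiresIn=300,  # 5 minutes
    )
    return {
        "statusCode": 200,
        "body": json.dumps({"uploadUrl": url, "key": key}),
    }
```

The frontend then PUTs the file to uploadUrl with the same Content-Type header, and stores the returned key (or passes it to another endpoint) so the object can be associated with the user.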