I am trying to download the images stored in an AWS S3 folder inside a bucket and display them in my frontend. The problem is that I am only able to download one image at a time. I want to download all the images in one go and then display them in my React UI. I am using Spring Boot in my backend. Below is my code.
public byte[] downloadUserProfileImage(int userProfileId) {
    String path = String.format("%s/%s",
            BucketName.PROFILE_IMAGE.getBucketName(),
            userProfileId);
    String filename = "profile_image.jpg";
    return fileStore.download(path, filename);
}
I have not used Spring Boot with AWS, but I have done this in Python, so in Java/Spring Boot only the syntax will change.
You need to loop through all the files in S3, get the keys for those files, and then call s3.download_file(...) for each one.
To loop through the files, use a Paginator - check the documentation.
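For illustration, a minimal boto3 sketch of that loop, assuming a placeholder bucket name and key prefix (the AWS SDK for Java offers an equivalent listObjectsV2 call for a Spring Boot backend):

import boto3

s3 = boto3.client('s3')

# Page through every object under the given prefix ("folder").
paginator = s3.get_paginator('list_objects_v2')
for page in paginator.paginate(Bucket='my-profile-images', Prefix='42/'):
    for obj in page.get('Contents', []):
        key = obj['Key']
        # Download each object to a local file named after its key.
        s3.download_file('my-profile-images', key, key.replace('/', '_'))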
I am trying to use an HTTP GET to iterate through a folder in ADLS. The folder "TemplatesUpdated" has a bunch of subfolders with a few files in each. I want to iterate through each subfolder and then copy each file to a new location. This is what I have so far, but I am not sure what to put in the Body to get each subfolder and each item within the subfolders.
Achieving this over plain HTTP requires you to implement the iteration yourself. There are two ways of achieving your requirement.
WAY - 1: USING LOGIC APPS WITH AZURE BLOB STORAGE CONNECTOR
You will require two List Blobs actions: one gets the subfolders in TemplatesUpdated, and the other retrieves the files in each subfolder.
WAY - 2: USING SDK
from azure.storage.blob import BlockBlobService

ACCOUNT_NAME = "<STORAGE ACCOUNT NAME>"
SAS_TOKEN = '<SAS TOKEN>'

# Authenticate with a SAS token instead of an account key.
blob_service = BlockBlobService(account_name=ACCOUNT_NAME, account_key=None, sas_token=SAS_TOKEN)

# List every blob in every container.
containers = blob_service.list_containers()
for c in containers:
    generator = blob_service.list_blobs(c.name)
    for blob in generator:
        print("\t Blob name: " + c.name + '/' + blob.name)
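If the goal is to copy each file somewhere else, the same legacy SDK can also download every blob under the TemplatesUpdated prefix. A minimal sketch extending the listing loop above; the container name and local paths are placeholders:

import os

container_name = '<CONTAINER NAME>'
for blob in blob_service.list_blobs(container_name, prefix='TemplatesUpdated/'):
    # Mirror the blob's virtual folder structure on local disk.
    local_path = os.path.join('downloads', *blob.name.split('/'))
    os.makedirs(os.path.dirname(local_path), exist_ok=True)
    blob_service.get_blob_to_path(container_name, blob.name, local_path)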
I need to display/stream large video files in React. These files are uploaded to a private S3 bucket by users via a React form and Flask.
I tried the getObject method, but my file size is too large. The get-a-signed-URL method required me to download the file.
I am new to the AWS-Python-React setup. What is the best/most efficient/least costly approach to display large video files in React?
AWS offers other streaming-specific services, but if you really want to serve the files straight off S3, you could retrieve them as torrents; with the right client/video player, that would let playback start without downloading the whole file.
Since you mentioned you're using Python, you could do this using the AWS SDK (boto3) like so:
import boto3

s3 = boto3.client('s3')
response = s3.get_object_torrent(
    Bucket='my_bucket',
    Key='/some_prefix/my_video.mp4'
)
The response object will have this format:
{
    'Body': StreamingBody()
}
Full docs here.
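For example, you could persist the returned StreamingBody as a .torrent file for a client to consume. A minimal sketch; the output filename is arbitrary:

# Write the torrent metadata to disk so a BitTorrent client can open it.
with open('my_video.torrent', 'wb') as f:
    f.write(response['Body'].read())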
Then you could use something like webtorrent to stream it on the frontend.
Two things to note about this approach (quoting docs):
Amazon S3 does not support the BitTorrent protocol in AWS Regions launched after May 30, 2016.
You can only get a torrent file for objects that are less than 5 GBs in size.
I am trying to learn React and Firebase right now. I would like to:
Download an image from my Google cloud storage
Display that image on my web page
I am following this guide here to download files: https://firebase.google.com/docs/storage/web/download-files
However, it seems out of date. I followed the advice in this other Stack Overflow thread, "google-cloud TypeError: gcs.bucket is not a function", to change the package import.
So right now I am able to download the file; however, I do not know how to access it. From what I understand it would be in memory, but how would I display it? The docs to download a file are here: https://googleapis.dev/nodejs/storage/latest/File.html#download.
This is currently what I have:
const { Storage } = require('@google-cloud/storage');

function MyPage(props: any) {
  const storage = new Storage({
    projectId: 'myProjectId',
  });

  // Wrapped in an async helper (hypothetical name) so the await is legal.
  async function fetchImage() {
    const myImage = await storage
      .bucket('myBucket')
      .file('myImage.jpg')
      .download();
  }

  return (
    <div id="MyPageContainer">
      <h1>Hello!</h1>
    </div>
  );
}
Instead of downloading the files from Cloud Storage to your web server, you should provide a link in your HTML so that users can download the files directly from Cloud Storage, as mentioned by @JohnHanley in the comments.
This takes the processing of the file off your app's back-end and hands it to Cloud Storage itself, which is more efficient, but there are performance and cost factors for you to consider when implementing it. If you are looking to deliver secure files, you can replace the direct links with Signed URLs; you can check the documentation for them here.
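A minimal sketch of generating a V4 signed URL with the Python client library, assuming placeholder project, bucket, and object names (the Node.js client exposes the equivalent getSignedUrl method):

import datetime
from google.cloud import storage

client = storage.Client(project='myProjectId')
blob = client.bucket('myBucket').blob('myImage.jpg')

# Create a URL that allows a GET on this object for 15 minutes.
url = blob.generate_signed_url(
    version='v4',
    expiration=datetime.timedelta(minutes=15),
    method='GET',
)
print(url)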
If you still choose to go with processing through your web server, you can take a look at this example; once you download the file, you will then need to create an HTML tag and location so the browser downloads it from your server.
I am making an application in which the users can upload some pictures so that others can see them. Since some of these can be a bit large, I need to generate smaller images to give a preview of the content.
I already have the uploaded images in GCS, at URLs of the form "https://storage.googleapis.com/...", but from what I can see in the Images API docs, it uses the Blobstore, which I am not using (it has been superseded). How can I serve the thumbnails from the GCS link to avoid making users load the full image? I would really appreciate a code example.
UPDATE:
I tried to copy the example using images.Image with the filename argument, as suggested, on an image from my app, but it gives me a TransformationError, and a NotImageError if I don't apply any transformations:
def get(self):
    teststr = '/gs/staging.trn-test2.appspot.com/TestContainer/Barcos-2017-02-12-145657.jpg'
    img = images.Image(filename=teststr)
    img.resize(width=80, height=100)
    thumbnail = img.execute_transforms(output_encoding=images.JPEG)
    self.response.headers['Content-Type'] = 'image/jpeg'
    self.response.out.write(thumbnail)
What am I missing?
In general you can use the Blobstore API, but with GCS as the underlying storage instead of the Blobstore; see Using the Blobstore API with Google Cloud Storage. IMHO just the storage is superseded, not the API itself:
Note: You should consider using Google Cloud Storage rather than Blobstore for storing blob data.
For the Image class in particular (from your link) you can use the filename optional constructor argument instead of the blob_key one (which triggers the above-mentioned blobstore API + GCS usage under the hood):
filename: String of the file name of a Google Storage file that contains the image data. Must be in the format `/gs/bucket_name/object_name`.
From its __init__() function:
if filename:
    self._blob_key = blobstore.create_gs_key(filename)
else:
    self._blob_key = _extract_blob_key(blob_key)
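As a related option that avoids running the transforms yourself, the same create_gs_key mechanism works with images.get_serving_url, which returns a URL that serves the image resized on the fly. A minimal sketch; the /gs/ path is a placeholder:

from google.appengine.api import images
from google.appengine.ext import blobstore

# Build a blob key for the GCS object, then request a serving URL
# that delivers the image resized to 80 pixels on its longest side.
gs_key = blobstore.create_gs_key('/gs/my-bucket/TestContainer/my-image.jpg')
thumbnail_url = images.get_serving_url(gs_key, size=80)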
This problem has stumped me for the better part of my day.
BACKGROUND
I am attempting to read the Play Store reviews for my Apps via my own Google App Engine Java project.
Now I am able to get the list of all the files using the Google Cloud Storage client API (Java).
I can also read the metadata for each of the CSV files in that bucket and print it to the logs.
PROBLEM
I simply can't find a way to read the actual object and get the CSV data.
My Java code snippet:
String BUCKET_NAME = "pubsite_prod_rev_*******";
String objectFileName = "reviews/reviews_*****_***.csv";
Storage.Objects.Get obj = client.objects().get(BUCKET_NAME, objectFileName);
InputStream is = obj.executeMediaAsInputStream();
Now when I print this InputStream, it tells me it's a GZIPInputStream (java.util.zip.GZIPInputStream@f0be2c). Converting this InputStream to byte[] or String (desired) does not work.
And if I try to wrap it in a GZIPInputStream using:
zis = new GZIPInputStream(is);
it throws a ZipException: Not in GZIP format.
Metadata of the file:
"contentType": "text/csv; charset=utf-16le",
"contentEncoding": "gzip",
What am I doing wrong?
Sub-question: in the past I have successfully read text data from Google Cloud Storage using GcsService, but it does not seem to work with the buckets that hold the Play Store review CSV files. Does anybody know whether my Google App Engine project (connected to the same Google developer account) can read these buckets?
Solved it using executeMedia() and parseAsString():
HttpResponse response = obj.executeMedia();
// parseAsString() honors the response's content encoding and charset
// (gzip + utf-16le here), which the raw InputStream approach did not.
String csvData = response.parseAsString(); // works!!