I am trying to learn React and Firebase right now. I would like to
Download an image from my Google cloud storage
Display that image on my web page
I am following this guide here to download files: https://firebase.google.com/docs/storage/web/download-files
However, it seems out of date. I followed the advice in this other Stack Overflow thread to change the package import: google-cloud TypeError: gcs.bucket is not a function.
So right now I am able to download the file, however I do not know how to access it. From what I understand it would be in memory, but how would I display it? The docs to download a file are here https://googleapis.dev/nodejs/storage/latest/File.html#download.
This is currently what I have
const { Storage } = require('@google-cloud/storage');

function MyPage(props: any) {
  const storage = new Storage({
    projectId: 'myProjectId',
  });

  const myImage = await storage
    .bucket('myBucket')
    .file('myImage.jpg')
    .download();
return (
<div id="MyPageContainer">
<h1>Hello!</h1>
</div>
);
}
Instead of downloading files from Cloud Storage to your web server, you should provide a link in your HTML so that users can download files directly from Cloud Storage, as mentioned by @JohnHanley in the comments.
This takes the file processing off your app's back end and leaves it to Cloud Storage itself, which is more efficient, though there are performance and cost factors for you to consider before implementing it. If you need to deliver secure files, you can use Signed URLs instead; the documentation for them is here.
If you still choose to go with processing through your web server, you can take a look at this example. Once you download the file, you will then need to serve it at a URL and add an HTML tag pointing to that location so the browser downloads it from your server.
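For reference, generating one of those Signed URLs with the @google-cloud/storage client might look like the sketch below. The bucket and file names are taken from the question, and the client is passed in as a parameter so the server code stays easy to exercise; this runs on the server, not inside the React component.

```javascript
// Sketch: return a short-lived signed URL for an object instead of
// proxying its bytes through the app server.
async function getImageUrl(storage, bucketName, fileName) {
  // getSignedUrl resolves to [url]; 'read' grants temporary GET access
  const [url] = await storage
    .bucket(bucketName)
    .file(fileName)
    .getSignedUrl({
      version: 'v4',
      action: 'read',
      expires: Date.now() + 15 * 60 * 1000, // valid for 15 minutes
    });
  return url;
}

// Server-side usage (assumed names from the question):
//   const { Storage } = require('@google-cloud/storage');
//   const url = await getImageUrl(new Storage(), 'myBucket', 'myImage.jpg');
// The React component can then render <img src={url} /> and let the
// browser fetch the image from Cloud Storage directly.
```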
Related
I need to display/stream large video files in reactjs. These files are being uploaded to private s3 bucket by user using react form and flask.
I tried the getObject method, but my file size is too large. The get-a-signed-URL method required me to download the file.
I am new to AWS-python-react setup. What is the best/most efficient/least costly approach to display large video files in react?
AWS offers other streaming-specific services, but if you really want to serve the files from S3 you could retrieve them using BitTorrent, which, with the right client/video player, would let you start playing without downloading the whole file.
Since you mentioned you're using Python, you could do this using AWS SDK like so:
import boto3

# Request a .torrent file for the object instead of the object itself
s3 = boto3.client('s3')
response = s3.get_object_torrent(
    Bucket='my_bucket',
    Key='some_prefix/my_video.mp4'
)
The response object will have this format:
{
'Body': StreamingBody()
}
Full docs here.
Then you could use something like webtorrent to stream it on the frontend.
Two things to note about this approach (quoting docs):
Amazon S3 does not support the BitTorrent protocol in AWS Regions launched after May 30, 2016.
You can only get a torrent file for objects that are less than 5 GBs in size.
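On the frontend side, a minimal sketch of the webtorrent half might look like this. The `torrentBytes` value and the `video#player` element are assumptions; the helper just picks the .mp4 entry out of the torrent's file list.

```javascript
// Helper: pick the playable .mp4 entry out of a torrent's file list
function pickVideoFile(files) {
  return files.find((f) => f.name.toLowerCase().endsWith('.mp4')) || null;
}

// Browser usage with webtorrent (assumed bundled or loaded as a global):
//   const client = new WebTorrent();
//   client.add(torrentBytes, (torrent) => {
//     // renderTo streams the video into the element as pieces arrive
//     pickVideoFile(torrent.files).renderTo('video#player');
//   });
```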
I have a React application with a component like this:
export class UploadComponent extends React.Component {
getFileInput(this: any) {
this.refs.fileUploader.click();
}
onChangeFile(event: any) {
event.stopPropagation();
event.preventDefault();
var file = event.target.files[0];
// What do I do here to replace a photo in my assets folder?
}
render() {
return (
<div>
<input
onChange={this.onChangeFile.bind(this)}
ref="fileUploader"
type="file"
/>
<input
type="button"
value="Upload photo"
onClick={this.getFileInput.bind(this)}
/>
</div>
);
}
}
In my src folder, I have a folder called assets with a file titled something like this: photo.png. Throughout my application, I reference photo.png by name. I want to allow the user to upload a photo of their choice which will replace photo.png with the newly-uploaded photo and rename it to photo.png as well so that the photo will be replaced everywhere.
I don't necessarily want to have to store this into a database because that'll require getting it from the database. I was wondering if there is a solution that will just do it entirely in React. Thank you for your help!
To perform operations like PUT, POST, etc., you need some processing to happen on the server/cloud to accept the request and then perform the action, in this case saving the file.
With that out of the way, there are multiple ways to approach the problem that you have described:
(Please note that these solutions assume that you do not want to deal with any servers like EC2. I'm assuming that you will be using S3 to serve the React app, which is a cheap and relatively easy way to host a static app.)
CloudFront: You can configure CloudFront to act as a proxy for S3. Then you can simply issue a PUT request to the same prefix and upload the file. (link 1, link 2)
API Gateway: You can create an API wherein an endpoint accepts a request and routes it to a Lambda function, where you can perform operations like resizing and optimization and then save the file to S3 or wherever you are hosting the app. Note that there is a limit on the payload size for API Gateway, which is 10 MB. (link)
Amplify - Client Side: With this you can configure Amplify to have write access to the S3 bucket and use the storage module. (link)
The above options are listed in increasing order of complexity in terms of setting them up and keeping the system secure.
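As a rough sketch of option 1, the client-side part is just a PUT of the selected file to the distribution. The CloudFront domain below is a placeholder, and the fetch implementation is injectable so the function can be exercised without a network.

```javascript
// Sketch of option 1: PUT the selected file through a CloudFront
// distribution configured as a proxy in front of the S3 bucket.
// The domain is a placeholder for your own distribution.
async function uploadPhoto(file, fetchImpl = fetch) {
  const res = await fetchImpl('https://dxxxxxxxx.cloudfront.net/assets/photo.png', {
    method: 'PUT',
    body: file,
    headers: { 'Content-Type': file.type || 'image/png' },
  });
  if (!res.ok) throw new Error(`upload failed: ${res.status}`);
  return res.status;
}

// In the component's onChangeFile handler:
//   uploadPhoto(event.target.files[0]).then(() => { /* refresh the image */ });
```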
I reference photo.png by name. I want to allow the user to upload a
photo of their choice which will replace photo.png with the
newly-uploaded photo and rename it to photo.png as well so that the
photo will be replaced everywhere.
I don't necessarily want to have to store this into a database because
that'll require getting it from the database.
If you are deploying a serverless app you can use a storage service like AWS S3 or Azure Blob Storage.
With a server app you could also take the option above, or use the filesystem API if it is supported, you have permission to use it, and the storage is persistent.
I want to produce a Google Apps document based on a (Google Doc) template stored on the user's Google Drive and some XML data held by a servlet running on Google App Engine.
Preferably I want to run as much as possible on GAE. Is it possible to run Apps Service APIs on GAE, or to download/manipulate a Google Doc on GAE? I have not been able to find anything suitable.
One alternative is obviously to implement the merge functionality using an Apps Script transferring the XML as parameters and initiate the script through http from GAE, but it just seem somewhat awkward in comparison.
EDIT:
Specifically, I am looking for the replaceText script functionality, as shown in the Apps Script snippet below, to be implemented on GAE. The remaining code is supported through the Drive/Mail APIs, I guess.
// Get document template, copy it as a new temp doc, and save the Doc’s id
var copyId = DocsList.getFileById(providedTemplateId)
.makeCopy('My-title')
.getId();
var copyDoc = DocumentApp.openById(copyId);
var copyBody = copyDoc.getActiveSection();
// Replace place holder keys,
copyBody.replaceText("CustomerAddressee", fullName);
var todaysDate = Utilities.formatDate(new Date(), "GMT+2", "dd/MM-yyyy");
copyBody.replaceText("DateToday", todaysDate);
// Save and close the temporary document
copyDoc.saveAndClose();
// Convert temporary document to PDF by using the getAs blob conversion
var pdf = DocsList.getFileById(copyId).getAs("application/pdf");
// Attach PDF and send the email
MailApp.sendEmail({
  to: email_address,
  subject: "Proposal",
  htmlBody: "Hi,<br><br>Here is my file :)<br>Enjoy!<br><br>Regards Tony",
  attachments: pdf
});
As you already found out, Apps Script is currently the only way to access an API that modifies Google Docs. All other approaches require exporting to another format (like PDF or .doc), using libraries that can modify those, and then re-uploading the new file with conversion to the native Google Docs format, which in some cases would lose formatting, comments, named ranges, and other Google Docs features. So, as you said, if you must use the Google Docs API you must call Apps Script (as a content service). Also note that the sample Apps Script code you show is old and uses the deprecated DocsList, so you need to port it to the Drive API.
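As a sketch, the deprecated DocsList calls in the question port to DriveApp roughly like this. DriveApp is the Apps Script service that replaced DocsList, and 'My-title' is taken from the question's snippet.

```javascript
// Apps Script sketch: DocsList.getFileById(...).makeCopy(...).getId()
// ported to the DriveApp service that replaced DocsList.
function copyTemplate(templateId, title) {
  return DriveApp.getFileById(templateId).makeCopy(title).getId();
}

// The getAs conversion later in the snippet ports the same way:
//   var pdf = DriveApp.getFileById(copyId).getAs('application/pdf');
```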
Apps Script pretty much piggybacks on top of the standard published Google APIs, and the behaviours are becoming increasingly familiar across them.
Obviously Apps Script is JS-based and GAE is not. All the APIs, apart from those related to script running, are available in the standard GAE client runtimes.
There is no code to check here, so I'm afraid a generic answer is all I have.
I see now that it can be solved by using the Google Drive API to export (download) the Google Apps Doc file as a PDF (or other formats) to GAE, and then doing simple replace-text editing using e.g. the iText library.
The appengine/image package works fine with images stored in Blobstore. However, what would be a good approach to resize images stored in Google Cloud Storage?
You can use the same Image Service with the Google Cloud Storage, especially if you use Blobstore API for uploading images.
A sample code in Java:
String fullName = "/gs/" + bucketName + "/" + objectName;
Image picture = ImagesServiceFactory.makeImageFromFilename(fullName);
Transform resize = ImagesServiceFactory.makeResize(maxWidth, maxHeight);
picture = imagesService.applyTransform(resize, picture);
In Go, you can use BlobKeyForFile function:
BlobKeyForFile returns a BlobKey for a Google Storage file. The
filename should be of the form "/gs/bucket_name/object_name".
Image cropping functionality is fairly easy to implement these days. You can then do whatever you want with the image - store it back to Google Storage or return it immediately to the client.
What's more, you can easily deploy that functionality to any cloud-based serverless solution. I was using Cloud Run because it's Docker-based and hence can potentially be ported anywhere.
I have a service that we use for image cropping based on nodejs/sharp and deployed into Google Cloud Run. You can use it as-is. There's nothing project-specific hardcoded in it.
I have the Google Picker set up, as well as Blobstore. I'm able to upload files from my local machine to the Blobstore. Now that the Picker is set up and working, I don't know how to use the info (URL? file ID?) to load the selected file into the Blobstore. Any tips on how to do this? I haven't been able to find much of anything on it in Google's resources.
There isn't a direct link between the Google Picker and the App Engine Blobstore. They are kind of different tools for different jobs. The Google Picker is designed as an end-user tool to select data from a user's Google account. It just so happens that the Picker also provides an upload interface (to Google Drive) as well. The Blobstore, on the other hand, is designed as a blob storage mechanism for your App Engine application.
In theory, you could write a script to connect the two, but there are a few considerations:
Your app would need access to the user's Google Drive account using OAuth2. This is necessary, as the Picker API is a client-side API, whereas the Blobstore API is a server-side API. You would need to send the selected document URL to the server, then download the document, and finally save it to the Blobstore.
Unless you then deleted the data from Drive (very risky due to point 3), your data would be persisted in 2 places
You cannot know for sure if the user selected an existing file, or uploaded a new one
Not a great user experience: the user thinks they are uploading to Drive.
In essence, this sounds like a bad idea! What is your use case?
@Gwyn - I don't have enough reputation to add a comment to your solution, but I had an idea about problem #3: You cannot know for sure if the user selected an existing file, or uploaded a new one.
Would it be possible to use Response.VIEW to see what view they were using when the file was selected? If you have one view constructor for Drive files and one for Upload files, something like
var driveView = new google.picker.View(google.picker.ViewId.DOCS);
var uploadView = new google.picker.DocsUploadView();
would that allow you to know whether the file was a new upload (safe to delete) or an existing file (leave it alone)?
Assuming that you want to pick a file from your own Google Drive and move it to the Blobstore:
1) First you have to perform OAuth for the Google Drive API.
2) Using the Picker, when you select a file from Drive you need to get its id.
3) Using the id obtained in step 2, you can programmatically download the file using the Drive API.
4) After downloading the file, you can use FileService (deprecated though) to upload it to the Blobstore.
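Step 3 can be sketched as a plain REST call against the Drive v3 API. The OAuth token comes from step 1 and the file id from step 2; the fetch implementation is injectable only to keep the sketch self-contained.

```javascript
// Sketch of step 3: fetch the picked file's bytes from the Drive v3 API.
// alt=media asks Drive for the file content rather than its metadata.
async function downloadDriveFile(fileId, accessToken, fetchImpl = fetch) {
  const res = await fetchImpl(
    `https://www.googleapis.com/drive/v3/files/${fileId}?alt=media`,
    { headers: { Authorization: `Bearer ${accessToken}` } }
  );
  if (!res.ok) throw new Error(`download failed: ${res.status}`);
  return res.arrayBuffer();
}

// The returned bytes can then be written to the Blobstore on the server
// (step 4), e.g. via the (deprecated) FileService.
```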