Upload image from POST request to Cloud Function

Is there any way that the image uploaded by the user can be sent directly to the Cloud Function without converting it to a string in Python?
Thank you in advance

Instead of sending the image to the Cloud Function, it's better to upload the image to GCS first. That way you can just provide the gsUri, and the Cloud Function can handle the image file using the Cloud Storage client library. The OCR Tutorial, for example, follows this approach.
In this case, if the function fails, the image is preserved in Cloud Storage. By using a background function that triggers when the image is uploaded, you can follow the strategy for retrying background functions to get "at-least-once" delivery.
If you want to use HTTP functions instead, the only recommended way to provide the image is to convert it to a string and send it inline with the POST. Note that for production environments it is much better to either keep the file within the same platform or send it inline.
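As a minimal sketch of the first approach, the background function below (Python, assuming the google-cloud-storage client library; the function name and processing step are placeholders) is triggered when an object lands in a bucket and reads the raw image bytes directly from Cloud Storage, with no base64 string involved:

    from google.cloud import storage

    def process_image(event, context):
        # Triggered by a "finalize" event when a file is uploaded to the bucket.
        bucket_name = event["bucket"]
        file_name = event["name"]

        client = storage.Client()
        blob = client.bucket(bucket_name).blob(file_name)
        image_bytes = blob.download_as_bytes()  # raw image bytes, no string conversion

        # Placeholder for the real work: OCR, resizing, validation, etc.
        print("Processed gs://%s/%s (%d bytes)" % (bucket_name, file_name, len(image_bytes)))

If the function fails, the object is still sitting in the bucket, so a retry simply processes it again.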

Related

Sanity Check: Is it possible to proxy between an HLS(m3u8) video stream and an angularjs app (ui)?

I need to create a Spring Boot WebFlux REST web service to act as a proxy between an AngularJS app that shows a video stream and an endpoint at dacast.com that delivers m3u8 playlist-based content.
At this time, there is a video component in the Angular app that takes the following URI and presents the content to the user. I plan to create a reactive WebFlux REST service, but I am at a loss as to how to implement this proxy. There are a lot of posts online about viewing an HLS feed in HTML, but nothing about how to proxy between the stream and a consumer of it.
https://dcunilive11-lh.akamaihd.net/i/dlive_1#xxxxxx/master.m3u8
I believe that I need to download the master.m3u8 file, which will contain https endpoints that I can download as a Flux stream and pass along to the angular app. Does this make sense? I'd appreciate your help and tips...
Thanks,
Mike
The m3u8 file is a text file which contains some info about the video and links to the media streams, as you say.
The simplest way for the Angular app to play the video would be to give it the link to the original m3u8 file directly, but I am guessing it can't reach that link for some reason in your use case.
Assuming this is correct, it sounds like your web service just needs to act as a proxy for the m3u8 file link and the media streams.
There are some instructions in the online Spring documentation for this - e.g.: https://cloud.spring.io/spring-cloud-gateway/1.0.x/multi/multi__building_a_gateway_using_spring_mvc.html
One thing that may be causing some confusion is that the HLS media streams are actually transferred between the client and the server as a series of client requests and server responses, i.e. similar to regular HTTP request/responses. They are not constantly streaming, i.e. not something you would use a WebSocket to read.
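Although the question targets Spring WebFlux, the proxy idea itself is small enough to sketch. A hedged illustration in Python (Flask and requests, with a hypothetical /hls/ route and upstream base URL) simply forwards playlist and segment requests and relays the responses; a Spring gateway route does the same job:

    import requests
    from flask import Flask, Response

    app = Flask(__name__)

    # Hypothetical upstream base URL; in the real setup this is the dacast/Akamai host.
    UPSTREAM_BASE = "https://example-hls-origin.com/i/dlive_1"

    @app.route("/hls/<path:name>")
    def proxy(name):
        # Each playlist (.m3u8) and media segment is fetched with a plain GET and
        # relayed back; there is no long-lived streaming connection to maintain.
        upstream = requests.get("%s/%s" % (UPSTREAM_BASE, name), stream=True)
        return Response(
            upstream.iter_content(chunk_size=64 * 1024),
            status=upstream.status_code,
            content_type=upstream.headers.get("Content-Type", "application/octet-stream"),
        )

    if __name__ == "__main__":
        app.run(port=8080)

Note that if the playlist contains absolute URLs, the proxy would also need to rewrite them so the player keeps requesting segments through the proxy.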

S3 multipart upload with React JS

I am trying to upload image/video files into an S3 bucket from my React JS application. So I referred to some of the React S3 uploader npm packages, react-dropzone-s3-uploader and react-s3-uploader-multipart. But both keep giving errors while being imported into the React JS component. And I have already posted this error message in another Stack question (please refer to that question). I would like to do this multipart upload directly from my React application to the S3 bucket. If anyone knows the solution, please share it with me.
Thanks in advance.
For me, the only library that worked perfectly and supported AWS S3 multipart upload with minimal work was Uppy. Highly recommended to try out:
https://uppy.io/docs/aws-s3-multipart/
You will need to provide a couple of endpoints for it though, so read the docs. You will see "Companion" mentioned there; you can safely ignore it, provide the 5 needed endpoints from your custom API, and it will be all good. I would suggest running the UI part, putting in some dummy URLs for these 5 functions, and checking the browser's network activity to understand more quickly how it works. The 5 endpoints are:
A function that calls the S3 Multipart API to create a new upload
A function that calls the S3 Multipart API to list the parts of a file that have already been uploaded
A function that generates a batch of signed URLs for the specified part numbers
A function that calls the S3 Multipart API to abort a Multipart upload, and removes all parts that have been uploaded so far
A function that calls the S3 Multipart API to complete a Multipart upload, combining all parts into a single object in the S3 bucket
Yet no matter which way you build multipart upload, you will always need to start the upload, list parts, get signed URLs to upload each part, abort the upload, and complete it. So it will never be a three-minute task to build, but with Uppy I had the most success.
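The backend side of those five endpoints can be written in any language. As a hedged sketch (not Uppy's own code), here is roughly what each one looks like in Python with boto3; the bucket name and function names are placeholders, and each function would sit behind one of the HTTP endpoints Uppy calls:

    import boto3

    s3 = boto3.client("s3")
    BUCKET = "my-upload-bucket"  # placeholder bucket name

    def create_upload(key, content_type):
        # 1. Start a new multipart upload.
        resp = s3.create_multipart_upload(Bucket=BUCKET, Key=key, ContentType=content_type)
        return {"uploadId": resp["UploadId"], "key": key}

    def list_uploaded_parts(key, upload_id):
        # 2. List the parts that have already been uploaded (used to resume).
        resp = s3.list_parts(Bucket=BUCKET, Key=key, UploadId=upload_id)
        return resp.get("Parts", [])

    def sign_part_urls(key, upload_id, part_numbers):
        # 3. One pre-signed URL per part number; the browser PUTs each chunk to S3.
        return {
            n: s3.generate_presigned_url(
                "upload_part",
                Params={"Bucket": BUCKET, "Key": key, "UploadId": upload_id, "PartNumber": n},
                ExpiresIn=3600,
            )
            for n in part_numbers
        }

    def abort_upload(key, upload_id):
        # 4. Abort the upload; S3 discards any parts uploaded so far.
        s3.abort_multipart_upload(Bucket=BUCKET, Key=key, UploadId=upload_id)

    def complete_upload(key, upload_id, parts):
        # 5. Combine the parts ({"ETag": ..., "PartNumber": ...}) into a single object.
        s3.complete_multipart_upload(
            Bucket=BUCKET, Key=key, UploadId=upload_id,
            MultipartUpload={"Parts": parts},
        )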
You can use React Dropzone Uploader, which gives you file previews (including image thumbnails) out of the box, and also handles uploads for you.
Uploads have progress indicators, and they can be cancelled or restarted. The UI is fully customizable.
Here's an example of how to upload files directly to an S3 bucket, using pre-signed URLs.
Full disclosure: I wrote this library.
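If you go the pre-signed URL route, the server-side piece is small. A hedged sketch in Python with boto3 (the bucket name and key prefix are placeholders) that generates the pre-signed POST data a browser uploader such as React Dropzone Uploader can consume:

    import boto3

    s3 = boto3.client("s3")

    def get_upload_params(filename):
        # Returns the URL and form fields the browser needs to POST the file to S3.
        post = s3.generate_presigned_post(
            Bucket="my-upload-bucket",      # placeholder bucket name
            Key="uploads/" + filename,      # placeholder key prefix
            ExpiresIn=3600,
        )
        return {"url": post["url"], "fields": post["fields"]}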
Here's a way to do it full-stack MERN with Express file upload. The server code here is minimal. This might be helpful; if not, no worries!
https://link.medium.com/U1SdsoHMy2

Limit upload size for App Engine interface to Cloud Storage

Consider an image (avatar) uploader to Google Cloud Storage which starts in the user's web browser, passes through a Go App Engine instance that handles standard compression/cropping etc., and then stores the resulting image as an object in Cloud Storage.
How can I ensure that the App Engine instance isn't overloaded by too much data or bad data? In other words, I think I'm asking two questions (or possibly not):
How can I limit the amount of data allowed to be sent to an appengine instance in a single request, or is there already a default safe limit?
How can I validate the data to make sure it's proper jpg/png/gif before attempting to process it with standard go image libraries?
All App Engine requests are limited to 32MB.
You can check the size of the file being uploaded before the upload starts.
You can verify the file's mime-type and only allow correct files to be uploaded.
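The question is about Go, but the checks themselves are language-agnostic. As a hedged illustration in Python, sniffing the magic bytes and enforcing the size limit could look like this (the standard-library imghdr module does the type detection; the limit mirrors App Engine's 32 MB request cap):

    import imghdr

    MAX_BYTES = 32 * 1024 * 1024  # App Engine's 32 MB request limit

    def validate_upload(data):
        # Reject anything larger than the request limit.
        if len(data) > MAX_BYTES:
            return False, "file too large"
        # Sniff the actual magic bytes rather than trusting the Content-Type header.
        kind = imghdr.what(None, h=data)  # returns 'jpeg', 'png', 'gif', ... or None
        if kind not in ("jpeg", "png", "gif"):
            return False, "not a supported image type"
        return True, kind

In Go, the equivalent checks would typically use http.MaxBytesReader plus image.DecodeConfig or http.DetectContentType.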

Download large file on Google App Engine Python

On my appspot website, I use a third-party API to query a large amount of data. The user then downloads the data as a CSV. I know how to generate a CSV and download it. The problem is that because the file is huge, I get a DeadlineExceededError.
I have tried increasing the fetch deadline to 60 seconds (urlfetch.set_default_fetch_deadline(60)). It doesn't seem reasonable to increase it any further.
What is the appropriate way to tackle this problem on Google App Engine? Is this something where I have to use Task Queue?
Thanks.
DeadlineExceededError means that your incoming request took longer than 60 seconds, not that your URL Fetch call did.
Deploy the code that generates the CSV file into a different module that you set up with basic or manual scaling. The URL to download your CSV will become http://module.domain.com
Requests can run indefinitely on modules with basic or manual scaling.
Alternatively, consider creating the file dynamically in Google Cloud Storage (GCS) with your CSV content. At that point, the file resides in GCS and you have the ability to generate a URL from which the user can download the file directly. There are also other options for different auth methods.
You can see documentation on doing this at
https://cloud.google.com/appengine/docs/python/googlecloudstorageclient/
and
https://cloud.google.com/appengine/docs/python/googlecloudstorageclient/functions
Important note: do not use the Files API (which was a common way of dynamically creating files in Blobstore/GCS), as it has been deprecated. Use the Google Cloud Storage Client API referenced above instead.
Of course, you can delete the generated files after they've been successfully downloaded and/or you could run a cron job to expire links/files after a certain time period.
Depending on your specific use case, this might be a more effective path.
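As a minimal sketch of that GCS path, assuming the App Engine GCS client library from the docs linked above (the bucket, object name, and row format are placeholders):

    import cloudstorage as gcs

    def write_csv_to_gcs(rows):
        filename = '/my-bucket/exports/report.csv'  # '/bucket/object' format
        with gcs.open(filename, 'w', content_type='text/csv') as gcs_file:
            for row in rows:
                gcs_file.write(','.join(row) + '\n')
        # The file now lives in GCS; hand the user a link to it (a signed URL, or the
        # public object URL if you make it public) instead of streaming the CSV from
        # the request handler that would otherwise hit the deadline.
        return filename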

Browser based file upload to AWS S3 and encode server-client workflow

I'm writing a single-page web app (AngularJS) and a server back end (Node.js). The communication between them is done via REST.
Currently I'm trying to implement the following scenario:
Upload big files from browser to S3 public bucket.
Copy uploaded file to private bucket on S3
Transcode uploaded file to an HTML5-compatible format (AWS Elastic Transcoder)
Store Meta-Object about the file in DB to access later
I'm racking my brains to come up with a good design for the communication/data workflow between server and client, but I always get stuck on the following questions:
Should I store the file meta-object at the end or at the beginning of the process? If it is at the beginning, do I have to store and handle some state information?
Who should start copying uploaded files to the private bucket, the server or the client? If it is the server, how can the client be informed that the job succeeded?
Who starts the transcoding process? If it is the server, how can the client be informed that the job succeeded?
How would you do this?
There is a pretty good tutorial which describes the use case you are planning to implement: http://www.bitcodin.com/blog/2015/02/create-mpeg-dash-hls-content-for-amazon-s3-and-cloudfront/
If your transcoding system has a RESTful API (like bitcodin, which is used in this tutorial, or any other service), you can also drive your application client-side and use the API calls to get the state of your transcodings, etc. However, using the API you can do the same server-side, whichever fits you better.
I personally would store the metadata at the beginning of the process, as this is the point in time where you generate the "asset" in your database/CMS/etc.
