Sanity Check: Is it possible to proxy between an HLS (m3u8) video stream and an AngularJS app (UI)?

I need to create a Spring Boot WebFlux REST web service to act as a proxy between an AngularJS app that shows a video stream and an endpoint at dacast.com that delivers m3u8 playlist-based content.
At the moment, a video component in the AngularJS app takes the following URI and presents the content to the user. I plan to create a reactive WebFlux REST service, but am at a loss as to how to implement this proxy. There are a lot of posts online about viewing an HLS feed in HTML, but nothing about how to proxy between the stream and a consumer of it.
https://dcunilive11-lh.akamaihd.net/i/dlive_1#xxxxxx/master.m3u8
I believe that I need to download the master.m3u8 file, which will contain HTTPS endpoints that I can download as a Flux stream and pass along to the AngularJS app. Does this make sense? I'd appreciate your help and tips...
Thanks,
Mike

The m3u8 file is, as you say, a text file that contains some info about the video plus links to the media streams.
The simplest way for the AngularJS app to play the video would be to give it the link to the original m3u8 file directly, but I am guessing it can't reach that link for some reason in your use case.
Assuming that is correct, it sounds like your web service just needs to act as a proxy for the m3u8 file link and the media streams.
There are some instructions in the online Spring documentation for this - e.g.: https://cloud.spring.io/spring-cloud-gateway/1.0.x/multi/multi__building_a_gateway_using_spring_mvc.html
One thing that may be causing some confusion is that the HLS media streams are actually transferred between the client and the server as a series of client requests and server responses, i.e. much like regular HTTP request/responses. They are not a continuous stream of the kind you might use a websocket to read.
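To make that two-request pattern concrete, here is a minimal sketch of the proxy idea. It is written in TypeScript/Express purely for illustration (the Spring Cloud Gateway documentation linked above covers the Java/WebFlux side), and the upstream URL, routes, and content types are assumptions rather than tested values:

```typescript
// Sketch only: TypeScript/Express stand-ins for the WebFlux handlers.
// UPSTREAM is a placeholder for the real dacast/Akamai base URL.
import express from "express";

const app = express();
const UPSTREAM = "https://dcunilive11-lh.akamaihd.net/i/dlive_1"; // placeholder

// 1. Proxy the playlist, rewriting each segment/sub-playlist URI so the
//    player requests it back through this proxy instead of the origin.
app.get("/proxy/playlist.m3u8", async (_req, res) => {
  const upstream = await fetch(`${UPSTREAM}/master.m3u8`);
  const playlist = await upstream.text();
  const rewritten = playlist
    .split("\n")
    .map((line) =>
      line && !line.startsWith("#")
        ? `/proxy/segment?uri=${encodeURIComponent(line)}`
        : line
    )
    .join("\n");
  res.type("application/vnd.apple.mpegurl").send(rewritten);
});

// 2. Proxy each media segment as an ordinary HTTP response - no websocket.
app.get("/proxy/segment", async (req, res) => {
  // Resolve relative segment URIs against the upstream base URL.
  const target = new URL(String(req.query.uri), `${UPSTREAM}/`);
  const upstream = await fetch(target);
  res.type("video/mp2t").send(Buffer.from(await upstream.arrayBuffer()));
});

app.listen(8080);
```

The key point is the playlist rewrite: once the player fetches the playlist through the proxy, every segment URI inside it also points back at the proxy, so the original host never needs to be reachable from the client.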

Related

GraphQL/Apollo application with file download from server

I'm a little new to GraphQL and this question falls under "It cannot possibly be this hard. I have to be missing something."
I have a fairly standard GraphQL/Apollo/React application split into client and server. Everything is working well with the client making API calls and getting data back from the server. The client is even able to upload files to the server. However, I now need the server to stream back files saved on disk. That's it.
This is the "I have to be missing something" part. Everything I've seen in the docs and on Stackoverflow is some variation of pushing the file back from the server and through the GraphQL query as a base64-endocded string and then doing some very hacky stuff on the client, often involving a hidden href tag and a simulated click. To this I say, "What???"
Seriously. There are files on disk that the server knows how to find. The client needs to show a button to the user that they can click on to download the file. That's it. Every other framework in every other language has an easy way to do this. Can someone show me the incredibly simple thing that I'm missing here?
Thanks,
Alex
What you're missing is that GraphQL really shouldn't be used for this purpose.
While GraphQL itself does not mandate a particular format for serializing responses, the de facto format is JSON. And the only way to get a file inside a JSON response is to serialize it as a string.
If you want to serve static content, you should set up Nginx, Apache or another web server that's been built with that in mind. Alternatively, if you're already using some existing web server library like Express, it most likely has tools for serving static content as well.
Just because you have a GraphQL endpoint does not necessarily mean it should be the only way your client communicates with your backend.
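For example, if the server already runs on Express alongside Apollo, a plain HTTP route next to the GraphQL endpoint is all it takes. A minimal sketch, assuming a hypothetical files directory and route names:

```typescript
// Sketch of a plain HTTP download route living next to the GraphQL endpoint.
// Directory and route names here are hypothetical.
import express from "express";
import path from "path";

const app = express();

// Option 1: serve a whole directory of files as static content.
app.use("/files", express.static(path.join(__dirname, "files")));

// Option 2: stream one file with a Content-Disposition header so the browser
// offers a save dialog; res.download() sets the header and streams the file.
app.get("/download/:name", (req, res) => {
  // basename() guards against "../" path traversal in the requested name.
  const safeName = path.basename(req.params.name);
  res.download(path.join(__dirname, "files", safeName));
});

app.listen(4000);
```

On the client, the download button is then just a regular link (an anchor pointing at /download/report.pdf, say): no base64 payloads, hidden href tags, or simulated clicks required.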

Upload Image from Post Request to Cloud function

Is there any way for the image uploaded by the user to be sent directly to a Cloud Function without converting it to a string in Python?
Thank you in advance
Instead of sending the image to the Cloud Function, it's better to upload the image to Cloud Storage (GCS) first. That way you can just provide the gsUri, and the Cloud Function can handle the image file by using the Cloud Storage client library. For example, the OCR Tutorial follows this approach.
In this case, if the function fails, the image is preserved in Cloud Storage. By using a background function triggered when the image is uploaded, you can follow the strategy for retrying background functions to get "at-least-once" delivery.
If you want to use HTTP Functions instead, then the only recommended way to provide the image is to convert it to a string and send it inline with the POST. Note that for production environments it is much better to either keep the file within the same platform or send it inline.
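As a sketch of the background-function approach (shown in TypeScript with the Functions Framework; the Python client library follows the same bucket/file pattern), the function receives only the object metadata on upload and reads the image bytes through the storage client, so no string encoding is involved. The function and event names here are illustrative:

```typescript
// Sketch of the background-function approach; fires when an object is
// finalized (uploaded) in the trigger bucket. The event carries metadata
// only, never a string-encoded copy of the image.
import * as functions from "@google-cloud/functions-framework";
import { Storage } from "@google-cloud/storage";

const storage = new Storage();

functions.cloudEvent<{ bucket: string; name: string }>(
  "onImageUpload", // illustrative function name
  async (event) => {
    const { bucket, name } = event.data!;
    // Read the image bytes through the Cloud Storage client.
    const [contents] = await storage.bucket(bucket).file(name).download();
    console.log(`Processing ${name}: ${contents.length} bytes`);
    // Throwing here lets the retry strategy redeliver the event
    // ("at-least-once" semantics), while the image stays safe in GCS.
  }
);
```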

Browser based file upload to AWS S3 and encode server-client workflow

I'm writing a single-page web app (AngularJS) and a server back end (Node.js). The communication between them is done via REST.
Currently I'm trying to implement the following scenario:
Upload big files from browser to S3 public bucket.
Copy uploaded file to private bucket on S3
Transcode uploaded file to HTML 5 compatible format (AWS Elastic Transcoder)
Store Meta-Object about the file in DB to access later
I'm racking my brains to get a well-working design for the communication/data workflow between server and client, but I always get stuck on the following questions:
Store the file meta-object at the end or at the beginning of the process? If it is at the beginning, I have to store and handle some state information.
Who should start copying uploaded files to the private bucket, the server or the client? If it is the server, how can the client get informed that the job succeeded?
Who starts the transcoding process? If it is the server, how can the client get informed that the job succeeded?
How would you do this?
There is a pretty good tutorial that describes the use case you are planning to implement: http://www.bitcodin.com/blog/2015/02/create-mpeg-dash-hls-content-for-amazon-s3-and-cloudfront/
If your transcoding system has a RESTful API (like bitcodin, which is used in this tutorial, or any other service), you can also drive the workflow client-side and use the API calls to get the state of your transcodings, e.g. by polling the job status as sketched below. The same API calls work server-side too, whatever fits better for you.
I personally would store the metadata at the beginning of the process, as this is the point in time where you generate the "asset" in your database/CMS/etc.
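A hypothetical polling helper, to show how the client can learn that a copy or transcoding job succeeded; the route and status values are placeholders for whatever API your transcoding service actually exposes:

```typescript
// Hypothetical client-side polling sketch: /api/transcodings and the status
// strings are placeholders, not a real transcoding API.
async function waitForTranscoding(jobId: string): Promise<void> {
  for (;;) {
    const res = await fetch(`/api/transcodings/${jobId}`); // placeholder route
    const { status } = (await res.json()) as { status: string };
    if (status === "FINISHED") return; // client now knows the job succeeded
    if (status === "ERROR") throw new Error(`Transcoding ${jobId} failed`);
    // Check again in a few seconds; a webhook from the server would avoid
    // polling altogether if the transcoding service supports one.
    await new Promise((resolve) => setTimeout(resolve, 5000));
  }
}
```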

Share tweet with image from my web app

When user clicks "Share on Twitter" button on my site, I'd like to prepopulate that tweet with an image (let's assume that image is served from my server).
It would be great if I could do it with Twitter's web intent, but that's apparently not possible: https://twittercommunity.com/t/tweet-intent-with-image/18740
It seems like I could use Twitter's POST media/upload API, but in that case I would have to implement 3-legged OAuth authorization? It also seems that it is not possible to do it directly from the client (due to CORS issues, and I'd have to expose my app's secret key in JavaScript code).
So I guess for this to work I'd need some server as a middleman between the client running my app and Twitter's OAuth provider?
Is there any service you could recommend that takes care of this? I found OAuth.io - I guess they act as the described middleman?
The third possible approach I found would be via Twitter Cards. Is it possible to make that work, given that I dynamically generate the content via AJAX calls?
This lit a beam of hope in me, but I'm not totally sure what it means yet: https://twittercommunity.com/t/crawler-ajax-escaped-fragment-support/16129
My actual situation: I'm developing an Angular app that displays Highcharts charts and I'd like my users to be able to share their screenshots.
My current high-level idea is: Highcharts' export feature sends a request to their server to generate the image; the server creates the image and serves it for 30 seconds, and I'm given its link in a callback on the client.
Now I can store that image somewhere else (my or Twitter's server?) and then we come to the problem described above.
I'd be grateful for any advice on how to do this in the most elegant way, one that is also as frictionless as possible for the users (e.g. OAuth requires that they authorize the app to post on their behalf).

Creating a channel for webRTC video chat

I've been following the HTML5rocks webRTC guide and I have the Javascript set up as described, however the guide is not clear on how to receive a channelToken, roomKey, and User ID. The guide says,
"Note that values used in the JavaScript, such as the room variable and
the token used by openChannel(), are provided by the Google App Engine
app itself: take a look at the index.html template in the repository
to see what values are added."
Unfortunately the link provided is no good, and I'm left with very little information regarding the most essential step in this process. The guide isn't clear about whether or not Google App Engine is a necessary component, and I don't see why it should be. I have searched the web in an attempt to find a more useful source, but I was unsuccessful. I also took a look at the WebRTC demo (https://apprtc.appspot.com), but that was no help either, seeing that the channel information is generated server side. I feel like I should just be able to make a simple HTTP request to some Google server and run from there. Any information regarding my problem would be much appreciated.
Apologies: the code for this example has been moved here.
(Been meaning to update the article, but haven't had a chance...)
The apprtc.appspot example uses the Channel API on App Engine for signaling, but there are lots of other ways to do this. Signaling mechanisms are not defined by the WebRTC spec. (Note that signaling, which is accomplished via a signaling service, is the exchange of network and media metadata in order to set up a WebRTC 'call': the actual data is communicated directly between peers.)
We ran a codelab at Google I/O which describes, from start to finish, how to build a video chat application that uses Socket.io on Node.js for signaling (it's very simple!). You might want to try that instead; a minimal sketch of the idea follows.
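For reference, here is a minimal Socket.io signaling relay in the spirit of that codelab. The server only forwards SDP offers/answers and ICE candidates between peers in a room (the room plays the role of the channel/roomKey); the media itself flows peer to peer. Event names are illustrative:

```typescript
// Minimal Socket.io signaling relay: forwards messages between peers in a
// room and never touches the media. Event names ("join", "signal") are
// illustrative, not part of any spec.
import { createServer } from "http";
import { Server } from "socket.io";

const httpServer = createServer();
const io = new Server(httpServer);

io.on("connection", (socket) => {
  // A client asks to join a named room (the "channel"/roomKey equivalent).
  socket.on("join", (room: string) => {
    socket.join(room);
    socket.to(room).emit("peer-joined", socket.id);
  });
  // Relay any signaling message (offer, answer, ICE candidate) to the other
  // peers in the same room.
  socket.on("signal", (room: string, message: unknown) => {
    socket.to(room).emit("signal", message);
  });
});

httpServer.listen(3000);
```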
