I'm studying the feasibility of using IIS Smooth Streaming to build a rich web application that displays time-synchronized audio/video and related textual data. The text data is a set of spacecraft telemetry that should be displayed outside of the video window.
I've seen some examples of how to display video captions at the correct time using this technology, and I've also read Chapter 11 of Silverlight Recipes, 2nd Edition, which again shows video captions. What I'm trying to do is a little more complex: show numerous pieces of data outside of the video window, kept in sync with the video playhead at all times.
It looks like I will encode my text data in tracks using the StreamIndex element type, with Type="text" and SubType="data". On the client, how do I display this in a separate panel next to the video?
From the marketese I know IIS Smooth Streaming should be able to handle this scenario; I'm just having no luck finding examples of how to handle the data on the client side. Can someone point me to an example, or tell me why this is not possible?
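For reference, here is the pattern I'm imagining on the client, adapted from the caption examples (untested; the event and property names are from my reading of the Smooth Streaming Client SDK caption samples and should be verified, and telemetryText is a hypothetical TextBlock in a panel beside the video):

```csharp
// Untested sketch based on the caption samples; verify the event and property
// names against your version of the Smooth Streaming Client SDK.
// "ssme" is a SmoothStreamingMediaElement declared in XAML, and
// "telemetryText" is a hypothetical TextBlock outside the video window.
using System.Text;
using Microsoft.Web.Media.SmoothStreaming;

public partial class PlayerPage : System.Windows.Controls.UserControl
{
    public PlayerPage()
    {
        InitializeComponent();
        ssme.TimelineEventReached += OnTimelineEventReached;
    }

    private void OnTimelineEventReached(object sender, TimelineEventArgs e)
    {
        // Each sparse text-track fragment arrives as raw bytes, already
        // synchronized to the playhead by the player.
        byte[] raw = e.Event.EventData.RawData;
        telemetryText.Text = Encoding.UTF8.GetString(raw, 0, raw.Length);
    }
}
```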
I'm looking for sample code, or a getting-started guide, for doing a "green screen" effect with the latest Azure Kinect DK.
How should I proceed to build and display a color stream containing ONLY the body area?
Is it possible to avoid using the body tracking stream? It requires a stronger computer (with an NVIDIA GPU).
We just published a new green screen code sample as part of our open-source GitHub repo microsoft/Azure-Kinect-Sensor-SDK.
You can find more information in the green screen example.
If you have any questions about the code, you can open a GitHub issue.
I just received my Azure Kinect device and have not yet tried a few things I am interested in, so this reply does not come from direct experience; however, the sample code and the SDKs indicate there may be a viable approach.
You can always try to implement traditional color-recognition algorithms, but if your usage scenario allows it, you can use data from the depth camera to filter only data within a depth range, with the "green screen" being out of range. You can then correlate pixels from the depth camera to the RGB image data to pick out data from the color stream that is within a certain depth range. Also, the background does not have to be a real green screen, but rather just has to be outside of the filtered depth range.
This approach lets you use the Sensor SDK alone, without the Body Tracking SDK and its associated GPU requirements.
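To make the idea concrete, here is a minimal sketch of the depth-range filter in C#, assuming the depth image has already been transformed into the color camera's geometry (the Sensor SDK provides transformation helpers for that); all names here are illustrative rather than taken from the SDK:

```csharp
// Illustrative depth-range mask, not SDK code: colorBgra is the BGRA color
// frame and alignedDepthMm is a per-pixel depth map (in millimetres) already
// transformed into the color camera's geometry.
public static void MaskBackground(byte[] colorBgra, ushort[] alignedDepthMm,
                                  ushort nearMm, ushort farMm)
{
    for (int i = 0; i < alignedDepthMm.Length; i++)
    {
        ushort d = alignedDepthMm[i];
        // A depth of 0 means "no reading"; with nearMm > 0 it falls out of
        // range and is treated as background too.
        bool inRange = d >= nearMm && d <= farMm;
        if (!inRange)
        {
            // Paint out-of-range pixels green: a virtual green screen.
            colorBgra[i * 4 + 0] = 0;    // B
            colorBgra[i * 4 + 1] = 255;  // G
            colorBgra[i * 4 + 2] = 0;    // R
            colorBgra[i * 4 + 3] = 255;  // A
        }
    }
}
```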
I am running an Apache2/Graphite host on an Orange Pi One, having written a service that translates and sends data from sensors on the GPIO to the Carbon line receiver. My project is to incorporate all the I/O from the device into a dashboard.
There are loads of Graphite dashboards, but I can't find one that has a simple video-stream applet/plugin.
I have searched the graphite-web GitHub repo and can easily adapt dashboard.html, but I am not sure whether the entire file is a placeholder, or whether any additions would render properly after all the JavaScript has run and rendered the page. It seems I might need to reverse-engineer the JavaScript, which is quite an effort for the simple task I want.
If I can figure out the video-stream code for the CSI camera, then I can adapt it to modify the dashboard with all the other data I want to display.
So I am really looking for some guidance on getting started with dashboard code modification.
You can use the text panel to add HTML content to the dashboard.
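For example, assuming the CSI camera is exposed as an MJPEG stream (e.g. via mjpg-streamer; the host, port, and path below are illustrative), the panel's HTML can be as simple as:

```html
<!-- Illustrative MJPEG embed; adjust the URL to wherever your camera streams. -->
<img src="http://orangepi.local:8080/?action=stream" width="640" alt="CSI camera" />
```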
I have a simple digital signage solution with a presentation application in WPF. I would like to "monitor" it from my remote machine by sending a stream of the content the application is currently showing (images, video, UserControls, etc.).
How do I do this? Do I need to manually take screenshots and send them as a video stream to my monitoring machine? And how would I encode them into a stream that the monitoring application (also WPF) can play back?
OK, if snapshots are OK then I've done this before. The way I did it was to take a "screenshot" of the app (using code you can find here: http://www.grumpydev.com/2009/01/03/taking-wpf-screenshots/ ), then have the signage app spin up a web service (HttpListener, WCF, or self-hosted Nancy) that returns the current screen whenever a request is made to a particular URL. Your monitoring app then polls that URL however often you need to.
This was done to monitor an interactive game for a Surface device and didn't seem to cause any perf issues, so it should be fine for your needs.
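Here is a rough sketch of that pattern using RenderTargetBitmap plus HttpListener (the class name, port, and URL are mine, not from the original code):

```csharp
// Sketch: serve a PNG screenshot of a WPF window over HTTP on demand.
using System;
using System.IO;
using System.Net;
using System.Windows;
using System.Windows.Media;
using System.Windows.Media.Imaging;

public class ScreenshotServer
{
    private readonly HttpListener _listener = new HttpListener();
    private readonly Window _window;

    public ScreenshotServer(Window window, string prefix = "http://+:8080/screen/")
    {
        _window = window;
        _listener.Prefixes.Add(prefix);
    }

    public void Start()
    {
        _listener.Start();
        _listener.BeginGetContext(OnRequest, null);
    }

    private void OnRequest(IAsyncResult ar)
    {
        HttpListenerContext context = _listener.EndGetContext(ar);
        _listener.BeginGetContext(OnRequest, null); // keep listening

        // RenderTargetBitmap must run on the UI thread.
        byte[] png = (byte[])_window.Dispatcher.Invoke(new Func<byte[]>(CapturePng));

        context.Response.ContentType = "image/png";
        context.Response.ContentLength64 = png.Length;
        context.Response.OutputStream.Write(png, 0, png.Length);
        context.Response.Close();
    }

    private byte[] CapturePng()
    {
        // Render the window's visual tree into a bitmap and encode it as PNG.
        var bmp = new RenderTargetBitmap((int)_window.ActualWidth, (int)_window.ActualHeight,
                                         96, 96, PixelFormats.Pbgra32);
        bmp.Render(_window);
        var encoder = new PngBitmapEncoder();
        encoder.Frames.Add(BitmapFrame.Create(bmp));
        using (var ms = new MemoryStream())
        {
            encoder.Save(ms);
            return ms.ToArray();
        }
    }
}
```

The monitoring app then just polls http://host:8080/screen/ and displays the returned PNG.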
I want to build a Silverlight live-feed viewer for an IP camera with a proprietary RTP server, i.e. no IIS and no Smooth Streaming extension. Is SmoothStreamingClient (or the Microsoft Media Platform) the best place to start?
You definitely don't want the SmoothStreamingClient, as that assumes that you're using a SmoothStreaming media source. However, what you can do instead is use a MediaElement and implement your own MediaStreamSource. This requires that you know how to parse the data being spewed by your IP camera and turn it into valid video samples, which is non-trivial, but it's the only supplied mechanism for displaying video data for which there isn't already a built-in streaming source.
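To give a feel for what's involved, here is a bare skeleton of a custom MediaStreamSource (the class name and the FourCC/size values are placeholders; the real work is filling in GetSampleAsync with the parsing for your camera's format):

```csharp
// Skeleton of a Silverlight MediaStreamSource for a live camera feed.
// CameraStreamSource and all attribute values are placeholders.
using System.Collections.Generic;
using System.Windows.Media;

public class CameraStreamSource : MediaStreamSource
{
    private MediaStreamDescription _videoDesc;

    protected override void OpenMediaAsync()
    {
        // Describe the video stream: codec FourCC, frame size, etc.
        var streamAttrs = new Dictionary<MediaStreamAttributeKeys, string>
        {
            { MediaStreamAttributeKeys.VideoFourCC, "H264" },
            { MediaStreamAttributeKeys.Width, "640" },
            { MediaStreamAttributeKeys.Height, "480" }
        };
        _videoDesc = new MediaStreamDescription(MediaStreamType.Video, streamAttrs);

        var sourceAttrs = new Dictionary<MediaSourceAttributesKeys, string>
        {
            { MediaSourceAttributesKeys.Duration, "0" },          // live: unknown
            { MediaSourceAttributesKeys.CanSeek, false.ToString() }
        };
        ReportOpenMediaCompleted(sourceAttrs, new[] { _videoDesc });
    }

    protected override void GetSampleAsync(MediaStreamType mediaStreamType)
    {
        // The hard part: parse the next frame out of the camera's RTP payload,
        // wrap it in a MediaStreamSample, then call
        // ReportGetSampleCompleted(new MediaStreamSample(_videoDesc, frameStream,
        //     offset, count, timestampHns, emptyAttributes));
    }

    protected override void SeekAsync(long seekToTime) { ReportSeekCompleted(seekToTime); }
    protected override void CloseMedia() { }
    protected override void GetDiagnosticAsync(MediaStreamSourceDiagnosticKind kind) { }
    protected override void SwitchMediaStreamAsync(MediaStreamDescription desc) { }
}
```

You would then hand it to the player with mediaElement.SetSource(new CameraStreamSource()).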
However, if the video format that your IP camera sends is already supported by Silverlight, then all you need to do is create a Stream that reads the camera data and pass that as the media source to a MediaElement.
The best way is to have a server-side app that gets the camera data and saves a picture to a location on the web server. Then you can have an HTML page refresh periodically to show the new image. The trick is to use a URL of the style http://someserver/someimage.jpg?dummy=i, where you replace i with a number that changes every time (a big random number or the current datetime), so that the browser doesn't cache and show the previously downloaded frame.
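As a sketch of that cache-busting refresh (the polling interval is arbitrary):

```html
<img id="cam" src="http://someserver/someimage.jpg" alt="camera" />
<script>
  // Append a changing dummy value so the browser fetches a fresh frame.
  setInterval(function () {
    document.getElementById('cam').src =
      'http://someserver/someimage.jpg?dummy=' + new Date().getTime();
  }, 2000);
</script>
```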
Suppose there is a Silverlight streaming video player on a random web site. How can I intercept the video stream and, for example, save it to a file, i.e. find the real source of the file?
I know some sites embed the source in a tag, or at least that was the case with Flash. But sometimes players are smarter than that and fetch the source via some web-service logic. It is still possible to figure everything out by analyzing the .dll with Reflector, but that is hardcore! Every player may have different logic, so I figured it would be easier to just get the current stream somehow.
Any thoughts?
OK! I've got an answer that can be used as a nice workaround. With the use of Fiddler I was able to capture the traffic and figure out what's going on. Now I'm happily watching the same video as before, only using the uber feature of WMP that lets me play videos faster.