How to play H264 stream with SilverLight? - silverlight

I have an H264 stream (IIS - smooth streaming) that I would like to play with SilverLight. Apparently SilverLight can do it, but how?
Note: a VC-1 stream can be played by Silverlight, but H264 cannot. I can provide a stream and any additional information required. The H264 encoder is the one in Media Foundation (MFT). The same goes for the VC-1 that works (although it is impossible to create equal chunks for smooth streaming, because forcing key-frame insertion makes the video jerky). EDIT: MPEG2VIDEOINFO values for H264:

Just a guess, based on your question 18009152: I am guessing you are encoding h.264 using the Annex B bitstream format. According to the comments, you cannot tell the encoder to use AVCC format. Therefore, you must perform this conversion manually (Annex B WILL NOT work in an ISO container). You can do this by looking for start codes in your AVC stream. A start code is 3 or 4 bytes (0x000001 or 0x00000001). You get the length of a NALU by locating the next start code, or the end of the stream. Strip the start code (throw it away) and in its place write the size of the NALU as a 32-bit big-endian integer. Then write this data to the container. To be clear, this is performed on the video frames that come out of the encoder. The extra data is a separate step that you appear to have mostly figured out (except for NALUSizeLength). Because we use a 4-byte integer to write the NALU sizes, you MUST set NALUSizeLength to 4.
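A minimal sketch of that conversion, assuming Python and a complete Annex B buffer in memory (the function name is mine, not part of any API):

```python
import struct

def annexb_to_avcc(data: bytes) -> bytes:
    """Replace each 3- or 4-byte Annex B start code with a 32-bit
    big-endian NALU length prefix (AVCC), as an ISO container requires."""
    starts = []  # (start_code_offset, payload_offset) for each NALU
    i, n = 0, len(data)
    while i + 2 < n:
        if data[i:i + 4] == b"\x00\x00\x00\x01":   # 4-byte start code
            starts.append((i, i + 4))
            i += 4
        elif data[i:i + 3] == b"\x00\x00\x01":     # 3-byte start code
            starts.append((i, i + 3))
            i += 3
        else:
            i += 1
    out = bytearray()
    for k, (_, payload) in enumerate(starts):
        # NALU ends where the next start code begins, or at end of stream
        end = starts[k + 1][0] if k + 1 < len(starts) else n
        nalu = data[payload:end]
        out += struct.pack(">I", len(nalu)) + nalu  # NALUSizeLength = 4
    return bytes(out)
```

Because the length prefix here is 4 bytes, this matches the NALUSizeLength = 4 requirement above.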

Silverlight 3 can play H264 files. Use MediaStreamSource for this.
Here is the interface description: http://msdn.microsoft.com/en-us/library/system.windows.media.mediastreamsource(v=vs.95).aspx
Also, this blog entry is related to playing H264 using Silverlight 3: http://nonsenseinbasic.blogspot.ru/2011/05/silverlights-mediastreamsource-some.html
It will help you with other issues that may arise.

Related

Libav (ffmpeg) what is the purpose of a container codec timebase and also a stream timebase?

I saw this answer, Libav (ffmpeg) copying decoded video timestamps to encoder
But I still don't understand why we need both the stream time base and the codec time base. Currently I'm trying to write some code that determines the time at which a frame is shown in a video from my decoder, so I think the right way to do that is
aVFrame.best_effort_timestamp * stream.time_base.num * stream.time_base.den. Is that correct?
"why we need both" is a loaded statement. We don't NEED both. Your question should be why do we HAVE both.
This is not an ffmpeg/libav invention, it is a side effect of how media files work. Some (but not all) codecs have a mechanism for encoding a time base into the codec bitstream (for example h.264). These bitstreams can then be written/muxed to a container (for example mp4) that also encodes a timebase. In theory these should match, but in practice they often do not. libav is just parsing the file and populating the structs with what is there.
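As a sketch of the arithmetic (note it is num/den, not num*den: a timestamp counts ticks, and each tick lasts time_base seconds; the function name is mine):

```python
from fractions import Fraction

def timestamp_to_seconds(ts: int, time_base_num: int, time_base_den: int) -> Fraction:
    """A timestamp is a count of ticks, each num/den seconds long,
    so wall-clock time is ts * num / den (NOT ts * num * den)."""
    return ts * Fraction(time_base_num, time_base_den)
```

For a typical MPEG-TS stream time base of 1/90000, a best_effort_timestamp of 180000 corresponds to 2 seconds of presentation time.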

how to convert jpegs to video with fixed fps?

I have a series of jpegs,I would like to pack and compress them to a Video.
I use the tool MPEG Streamclip, but it doubles the whole play time.
If I have 300 jpegs and set a fixed fps of 30, I expect to get a video of 10 s length, but using Streamclip I get a 20 s long video.
One answer is to get someone who understands programming. The programming APIs (application programming interfaces, the way client programs call libraries) of the big libraries like ffmpeg have ways in which the frame rate can be controlled, and it's usually quite a simple matter to modify a program to produce fewer intermediate frames if you are creating a video from a list of JPEGs.
But the best answer is probably to find a tool that supports what you want to do. That's not especially a question to ask a programmer. Ask someone who knows about video editing. (It would take me about two days to write such a tool from scratch on top of my own JPEG codec and ffmpeg, so obviously I can't do it in response to this question, but that's roughly the level of work you're looking at.)
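If the ffmpeg command-line tool is available, the frame-rate control mentioned above amounts to one flag. A sketch that just builds the argument list (the pattern and output name are placeholders; assumes ffmpeg with libx264 is installed):

```python
def jpegs_to_video_cmd(pattern: str, fps: int, output: str) -> list:
    """Build an ffmpeg command line that packs numbered JPEGs into a
    video at a fixed frame rate. Putting -framerate before -i sets the
    INPUT rate, so 300 images at 30 fps yield exactly 10 seconds."""
    return ["ffmpeg",
            "-framerate", str(fps),   # rate at which the stills are read
            "-i", pattern,            # e.g. "img%03d.jpg"
            "-c:v", "libx264",        # encode with x264
            "-pix_fmt", "yuv420p",    # widest player compatibility
            output]
```

The resulting list can be passed to subprocess.run() to execute the conversion.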

Where to find stereo test sequences in yuv

Where can I find YUV stereo test sequences? I have to use them for disparity compensation, so I'll be needing the separate left and right views. Thank you!
There are at least 2 digital libraries, which contain uncompressed stereo video in YUV 4:2:0 with separate-left-right-views format:
http://sp.cs.tut.fi/mobile3dtv/stereo-video/
FTP download only; video resolution for the YUVs is ~480x270. These sequences are not copyright-free, so checking the license file is advisable.
http://nma.web.nitech.ac.jp/fukushima/multiview/multiview.html
Here you need 'Multiview sequence 1' and 'Multiview sequence 2'
HTTP download is possible. These sequences are actually multiview, meaning they contain not only left and right views but several more. Any 2 views from there will work as stereo video.
RMIT3dv library may also be useful, but it contains uncompressed YUV 4:2:2 video in .MOV format, so you will need to perform some format conversion to get .yuv files. But video quality on this site is way better.
A link to this library will be the first result of a Google search for 'RMIT3dv'.

last frame oddity while reading/ writing AVIs with OpenCV

I've been using OpenCV for quite some time now, and I have always more or less ignored an oddity that occurs while writing AVIs with OpenCV commands. But now I need it for another purpose, and it has to be accurate.
When I read an XVID-compressed AVI with cvCaptureFromFile (or cvCaptureFromAVI) and then write the frames with cvVideoWriter (choosing XVID compression from the W32 menu), the resulting AVI always lacks the last frame of the original video. That frame is also ignored while reading, unless the input video is an uncompressed AVI; but in that case, when I choose uncompressed (or a codec) for saving, the last frame makes trouble and the program aborts, leaving no readable AVI file.
What can I do about it, anyone know?
Cheers
Stephan
1) Upgrade to the newest OpenCV available and try again.
2) If that doesn't work, you'll have to choose another multimedia framework to read the frames: ffmpeg or gstreamer.
That's all I can think of right now.

How to use MediaStreamSource to play h264 frames coming from a matroska file?

I'm trying to render frames coming from an mkv h264 file in silverlight 3 by using the MediaStreamSource.
Parsing the mkv file is fine, but I'm struggling with the expected value of CodecPrivateData in SL, which has to be a string, while the PrivateData info from mkv is a binary element.
Also, I'm not sure in which form the frames should be given to SL (i.e., the way they are stored in mkv/mp4, or transcoded as NALUs).
Would anyone have any info on this?
After similar problems of my own and much head-scratching, I am able to answer this question.
In ReportOpenMediaCompleted(), when setting up your video stream description, you can ignore the CodecPrivateData attribute string, despite what the documentation says. It's not required (assuming your stream of NAL units includes SPS and PPS units).
You should send one NAL unit back to the MediaElement for each GetSampleAsync() request.
This includes non-picture NAL units, e.g. SPS / PPS units.
When you send your NAL units, ensure there are 3-byte start codes (0x00 0x00 0x01) at the beginning of each one. (This is similar to 'Annex B' format, but not quite the same thing)
In ReportGetSampleCompleted(), set the value of 'Offset' equal to the beginning of the NAL start code, not the actual data. (in most cases this will be zero, assuming you use a fresh stream per NAL unit)
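The steps above can be sketched as follows (Python for illustration only; the function name and the (bytes, offset) tuple shape are mine, standing in for the sample buffers you would hand back in C#):

```python
START_CODE = b"\x00\x00\x01"  # the 3-byte start code described above

def mss_samples(sps: bytes, pps: bytes, frame_nalus):
    """Yield (sample_bytes, offset) pairs in the order GetSampleAsync
    requests would be answered: SPS first, PPS second, then one NAL
    unit per sample, each prefixed with the 3-byte start code. The
    offset points at the start code, i.e. 0 for a fresh stream per NALU."""
    for nalu in [sps, pps, *frame_nalus]:
        yield START_CODE + nalu, 0
```

Note that the non-picture SPS/PPS units are returned as ordinary samples, exactly as the answer above requires.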
I have blogged a little about the experience here and hope to blog more.
According to the documentation, CodecPrivateData should be set to 00000001 + sps + 00000001 + pps. However, the documentation is wrong: the value of CodecPrivateData seems to be completely ignored. Instead you need to pass the SPS and PPS NALs (with an Annex B header, of course) as the first and second results of GetSampleAsync.
For regular media samples, normal 4-byte Annex B headers work just fine.
The CodecPrivateData is the contents of the 'avcC' atom which is a child of the 'stsd' atom in an MP4 file. You have to convert the binary data to a string. It will look something like this: "014D401FFFE10017674D401F925402802DD0800000030080000018478C195001000468EE32C8"
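Assuming you have the raw 'avcC' payload as bytes (e.g. extracted from the mkv CodecPrivate element), producing the string form is plain hex encoding; a minimal sketch (the function name is mine):

```python
def codec_private_data(avcc: bytes) -> str:
    """Hex-encode the binary avcC box payload as the uppercase hex
    string shown above (e.g. bytes 01 4D 40 1F ... -> "014D401F...")."""
    return avcc.hex().upper()
```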
You also have to replace the mkv/mp4 NALU length prefixes with start codes. I've written a little about this (to get Smooth Streaming to work for H.264 files).
Regards,
See: Smooth Streaming H264
