gst_h264_parser_identify_nalu Not Finding NAL in Buffer - c

/* buffer_map is a GstMapInfo filled in with gst_buffer_map() on the
 * buffer pulled from appsink */
GstH264NalParser *parser = NULL;
GstH264NalUnit nal_unit = { 0 };

parser = gst_h264_nal_parser_new();
GstH264ParserResult parser_result =
    gst_h264_parser_identify_nalu(parser,
                                  buffer_map.data,
                                  0,
                                  buffer_map.size,
                                  &nal_unit); /* This returns GST_H264_PARSER_NO_NAL */
Why is that? Unless data is not supposed to come from a GstMapInfo* but some other data structure. A GstStructure pointer from a GstSample, perhaps?
Context
Writing a small program that parses H.264-encoded video from GStreamer's videotestsrc and appsink plug-ins. So far, so good.
Using the (bad) x264enc plug-in in my pipeline to convert the stream before feeding it into an h264parse, then into appsink. Pretty sure the h264parse is an unnecessary step, but I get the same results with and without it.
Convinced that I am using the incorrect struct to read data into the NALU parse function.

If you believe your incoming data is good, odds are you need a small conversion, because H.264 streams come in a few different stream formats: byte-stream (Annex B, with start codes) and avc/avc3 (length-prefixed NAL units). gst_h264_parser_identify_nalu() scans for start codes, so it returns GST_H264_PARSER_NO_NAL on avc-formatted data (for that there is gst_h264_parser_identify_nalu_avc()). Converting between these formats is what the h264parse element is for.
Pad Templates:
  SRC template: 'src'
    Availability: Always
    Capabilities:
      video/x-h264
          parsed: true
          stream-format: { avc, avc3, byte-stream }
          alignment: { au, nal }
So in your pipeline you might try permutations of the stream-format and alignment options, such as:
gst-launch-1.0 videotestsrc ! ... ! h264parse ! video/x-h264,stream-format=byte-stream,alignment=nal ! appsink
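For reference, a minimal sketch of requesting byte-stream output from C by setting caps on the appsink, so that the parser sees start codes (variable names here are placeholders, not the asker's code):

/* Ask appsink for Annex B (byte-stream) H.264 so that
 * gst_h264_parser_identify_nalu() can find start codes. */
GstElement *appsink = gst_element_factory_make ("appsink", NULL);
GstCaps *caps = gst_caps_from_string (
    "video/x-h264,stream-format=byte-stream,alignment=nal");
g_object_set (appsink, "caps", caps, NULL);
gst_caps_unref (caps);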

Related

How to play audio with gstreamer in C?

I'm trying to play audio with GStreamer in C.
I'm using a laptop running Ubuntu 16.
To play, I use this pipeline, and it works:
gst-launch-1.0 filesrc location=lambo-engine.mp3 ! decodebin ! audioconvert ! autoaudiosink
But when I convert it to C:
GstElement *pipeline, *source, *decode, *audioconvert, *sink;
GMainLoop *loop;
GstBus *bus;
GstMessage *msg;

int main(int argc, char *argv[]) {
    /* Initialize GStreamer */
    gst_init (&argc, &argv);

    /* Create the elements */
    source = gst_element_factory_make("filesrc", NULL);
    decode = gst_element_factory_make("decodebin", NULL);
    audioconvert = gst_element_factory_make("audioconvert", NULL);
    sink = gst_element_factory_make("autoaudiosink", NULL);

    // Set parameters for some elements
    g_object_set(G_OBJECT(source), "location", "lambo-engine.mp3", NULL);

    /* Create the empty pipeline */
    pipeline = gst_pipeline_new ("pipeline");

    /* Build the pipeline */
    gst_bin_add_many (GST_BIN (pipeline), source, decode, audioconvert, sink, NULL);
    if (gst_element_link_many(source, decode, audioconvert, sink, NULL) != TRUE){
        g_error("Failed to link save elements!");
        gst_object_unref (pipeline);
        return -1;
    }

    /* Start playing */
    gst_element_set_state (pipeline, GST_STATE_PLAYING);
    bus = gst_element_get_bus (pipeline);
    gst_object_unref (bus);

    /* now run */
    g_main_loop_run (loop);

    /* Free pipeline */
    gst_element_set_state (pipeline, GST_STATE_NULL);
    gst_object_unref (GST_OBJECT(pipeline));
    return 0;
}
I can build it successfully, but when I run it, it returns an error saying it can't link the elements:
** (example:2516): ERROR **: 22:59:42.310: Failed to link save elements!
Trace/breakpoint trap (core dumped)
Can someone help me figure out the error, please?
Thank you so much.
Gstreamer is centered around pipelines that are lists of elements. Elements have pads to exchange data on. In your example, decodebin has an output pad and audioconvert has an input pad. At the start of the pipeline, the pads need to be linked.
This is when pads agree on the format of data, as well as some other information, such as who's in charge of timing and maybe some more format specifics.
Your problem arises from the fact that decodebin is not actually a real element. At runtime, when filesrc starts up, it tells decodebin what pad it has, and decodebin internally creates elements to handle that file.
For example:
filesrc location=test.mp4 ! decodebin would run in this order:
delay linking because types are unknown
start filesrc
filesrc says "trying to link, I have a pad of format MP4 (h264)"
decodebin sees this request and, in turn, creates on the fly the demuxer and decoder elements that can handle the mp4 file
decodebin now has enough information to describe its pads, and it links the rest of the pipeline
video starts playing
Because you are using C to do this, you link the pipeline before filesrc loads the file. This means that decodebin doesn't know the format of its pads at startup, and therefore fails to link.
To fix this, you have two options:
1.) Swap out decodebin for something that supports only one type. If you know your videos will always be mp4s with h264, for example, you can use h264parse instead of decodebin. Because h264parse only works with one format, it knows its pad formats at the start and will be able to link without issue.
2.) Reimplement the smart delayed linking. You can read the docs for more info, but you can delay linking of the pipeline and install callbacks to complete the linking when there's enough information (see the sketch below). This is what gst-launch-1.0 is doing under the hood. This has the benefit of being more flexible: anything supported by decodebin will work. The downside is that it's much more complex, involves a nontrivial amount of work on your end, and is more fragile. If you can get away with it, try fix 1.
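For option 2, a minimal sketch of the delayed linking, assuming the element variables from the question (the callback name is a placeholder):

/* Called by decodebin each time it exposes a new source pad,
 * i.e. once it has figured out what is inside the file. */
static void on_pad_added (GstElement *decodebin, GstPad *new_pad, gpointer user_data)
{
    GstElement *audioconvert = GST_ELEMENT (user_data);
    GstPad *sink_pad = gst_element_get_static_pad (audioconvert, "sink");

    /* Complete the link that had to be deferred at startup. */
    if (!gst_pad_is_linked (sink_pad))
        gst_pad_link (new_pad, sink_pad);

    gst_object_unref (sink_pad);
}

/* In main(): link only the static parts up front and let the
 * callback handle the decodebin -> audioconvert link later. */
gst_element_link (source, decode);
gst_element_link_many (audioconvert, sink, NULL);
g_signal_connect (decode, "pad-added", G_CALLBACK (on_pad_added), audioconvert);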

How to embed subtitles into an mp4 file using gstreamer

My Goal
I'm trying to embed subtitles into an mp4 file using the mp4mux gstreamer element.
What I've tried
The pipeline I would like to use is:
GST_DEBUG=3 gst-launch-1.0 filesrc location=sample-nosub-avc.mp4 ! qtdemux ! queue ! video/x-h264 ! mp4mux name=mux reserved-moov-update-period=1000 ! filesink location=output.mp4 filesrc location=english.srt ! subparse ! queue ! text/x-raw,format=utf8 ! mux.subtitle_0
It just demuxes a sample mp4 file for the h.264 stream and then muxes it together with an srt subtitle file.
The error I get is:
Setting pipeline to PAUSED ...
0:00:00.009958915 1324869 0x5624a8c7a0a0 WARN basesrc gstbasesrc.c:3600:gst_base_src_start_complete:<filesrc0> pad not activated yet
Pipeline is PREROLLING ...
0:00:00.010128080 1324869 0x5624a8c53de0 WARN basesrc gstbasesrc.c:3072:gst_base_src_loop:<filesrc1> error: Internal data stream error.
0:00:00.010129102 1324869 0x5624a8c53e40 WARN qtdemux qtdemux_types.c:239:qtdemux_type_get: unknown QuickTime node type pasp
0:00:00.010140810 1324869 0x5624a8c53de0 WARN basesrc gstbasesrc.c:3072:gst_base_src_loop:<filesrc1> error: streaming stopped, reason not-negotiated (-4)
0:00:00.010172990 1324869 0x5624a8c53e40 WARN qtdemux qtdemux.c:3237:qtdemux_parse_trex:<qtdemux0> failed to find fragment defaults for stream 1
ERROR: from element /GstPipeline:pipeline0/GstFileSrc:filesrc1: Internal data stream error.
Additional debug info:
gstbasesrc.c(3072): gst_base_src_loop (): /GstPipeline:pipeline0/GstFileSrc:filesrc1:
streaming stopped, reason not-negotiated (-4)
ERROR: pipeline doesn't want to preroll.
Setting pipeline to NULL ...
Freeing pipeline ...
My Thoughts
I believe the issue is not related to the above warning, but rather to mp4mux's incompatibility with srt subtitles.
The reason I believe this is that other debug logs hint at it, and also that stealing the subtitles from another mp4 file and muxing them back together does work:
gst-launch-1.0 filesrc location=sample-nosub-avc.mp4 ! qtdemux ! mp4mux name=mux ! filesink location=output.mp4 filesrc location=sample-with-subs.mp4 ! qtdemux name=demux demux.subtitle_1 ! text/x-raw,format=utf8 ! queue ! mux.subtitle_0
A major catch-22 I am having is that mp4 files don't typically support srt subtitles, but gstreamer's subparse element doesn't support parsing mp4 subtitle formats (tx3g, ttxt, etc.), so I'm not sure how I'm meant to put it all together.
I'm very sorry for the lengthy question but I've tried many things so it was difficult to condense it. Any hints or help is appreciated. Thank you.

GStreamer: Pipeline to write data to file based on condition

Is there a way to write data in a GStreamer pipeline to a file based on an (external) condition?
I have an application/code, which streams/displays video to the screen and continuously writes it to a file (it works fine).
I would like the GStreamer pipeline to write to a file only if an external condition is true (at runtime; I don't know the condition in advance).
What I have done so far:
I carefully searched the official GStreamer documentation, where I found some information on appsink, but I don't really see how to apply it based on an (external) condition.
I also used 'dynamic pipelines' as a search term, which seems to describe the modification of GStreamer pipelines based on conditions.
I also searched the GStreamer mailing list and found this post, which uses the gst_element_set_locked_state() function.
I added
if (condition) {
    gst_element_set_locked_state(videosink, TRUE);
} else {
    gst_element_set_locked_state(videosink, FALSE);
}
to my code, but then the pipeline would not work at all (displaying a black image).
Another way is described on https://coaxion.net/blog/2014/01/gstreamer-dynamic-pipelines/ in Example 2 with the corresponding code being available on GitHub (https://github.com/sdroege/gst-snippets/blob/217ae015aaddfe3f7aa66ffc936ce93401fca04e/dynamic-tee-vsink.c).
It seems to use a callback and the gst_element_set_state (sink->sink, GST_STATE_NULL) function call to write to a file based on an (external) condition.
Applying this function in analogy to the one above causes the pipeline to display fine, but also results in continuous (and not conditional) output to a file:
if (condition) {
    gst_element_set_state(videosink, GST_STATE_PLAYING);
} else {
    gst_element_set_state(videosink, GST_STATE_NULL);
}
Also, gst_pad_add_probe() could be a way to dynamically change the output to a file, but despite having looked in the GStreamer documentation, I don't know how to use this function correctly.
For your requirement you need the tee and valve elements.
tee will split the pipeline into one branch for displaying to a window and one for writing to a file. valve is the condition you are looking for: its drop property drops buffers at the point in the pipeline where the valve sits.
Your pipeline will be like:
gst-launch-1.0 -v --eos-on-shutdown ksvideosrc ! videoconvert ! tee name=t ! queue ! valve drop=false ! autovideosink t. ! queue ! valve drop=false ! openh264enc ! h264parse ! mp4mux ! filesink location="test.mp4"
When your condition occurs, set the specific valve's drop property to true to stop writing to the file.
In C/C++:
if (condition)
    g_object_set(videoValve, "drop", TRUE, NULL);
else
    g_object_set(videoValve, "drop", FALSE, NULL);
WARNING:
The valve's drop property must remain false until data has passed through everything in the pipeline. That means you should only set a valve's drop property to true once the pipeline is in the PLAYING state. You can adjust your code accordingly, for example by triggering the mechanism in a bus callback, where you can check the pipeline state.
Note: ksvideosrc is for Windows; on Linux, try v4l2src.
If you build your application like this, it will work; I use a similar scenario.
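A minimal sketch of that setup in C, using gst_parse_launch with a named valve (videotestsrc stands in for the capture source; names and the condition are placeholders):

#include <gst/gst.h>

int main (int argc, char *argv[])
{
    gst_init (&argc, &argv);

    /* tee splits the stream; the valve gates the file branch. */
    GstElement *pipeline = gst_parse_launch (
        "videotestsrc ! videoconvert ! tee name=t "
        "t. ! queue ! autovideosink "
        "t. ! queue ! valve name=filevalve drop=false "
        "! openh264enc ! h264parse ! mp4mux ! filesink location=test.mp4",
        NULL);
    GstElement *valve = gst_bin_get_by_name (GST_BIN (pipeline), "filevalve");

    gst_element_set_state (pipeline, GST_STATE_PLAYING);

    /* Whenever the external condition changes (only after reaching
     * PLAYING, per the warning above), toggle the valve: */
    gboolean condition = TRUE;  /* placeholder for the external condition */
    g_object_set (valve, "drop", condition, NULL);

    /* ... run a main loop, send EOS, then clean up ... */
    gst_object_unref (valve);
    gst_object_unref (pipeline);
    return 0;
}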

Error in calculating pps for h264 rtp stream

I have a problem in my GStreamer pipeline that causes the sprop-parameter-sets to (I think) overflow its buffer. I am doing this on an i.MX6 board, my pipeline is appsrc format=3 ! imxvpuenc_h264 ! rtph264pay, and I use an RTSP server for accessing the pipeline. The pipeline works if a static image is sent, but in the case of a video it stops working because the wrong PPS is calculated.
I have tried using a static sprop-parameter-sets for rtph264pay by setting its property, but then the same thing happens in rtph264depay, which calculates a new sprop-parameter-sets. The output from the caps creation can be seen below:
0:01:15.970217009 578 0xa482ad50 INFO GST_EVENT gstevent.c:809:gst_event_new_caps: creating caps event application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, packetization-mode=(string)1, sprop-parameter-sets=(string)"Z0JAIKaAUAIGQAA\=\,aM48gP94AAIS4AAg2AACAudABxMbtz5ZqJ6U4vk7wAAQMgABAOgA5R6ZQkwQNaTPhfwAQAAgjAACD54YHcvx9FXG9ON62mcABAAFAAEAYbX2rm8Qe4mSKvXrwAAQBgACNJAZdcgDiEnNE5djN4GAAIJhoAKAEnAmvb0KVFQMwyGTwAAi4AIgBINIKIds1udUngAAgcAACAWS1IEgBehG7wDL75/W5JRBIi0WrX8gABAsAAEA0DVsAnpAKiCjVLNdK8AAEJ4AEAc/YVCfjDJO+t73KSd4AAII4AAgpAACAWwBo6CGMh3HueozX+Z4AAIJgAAgOgD2gYFqlGlGBjWn1MULXgAAg5AACAkEA8JLN5OJHLJcZmDo+eAACC8AAIDoAMAGGzM8zzGmJZwKeFL8AAQAAgKhbICDBChH5BKlw+PuMscAACACAAcACA3uGjeSK7gZZzT+NH/ewABDWAAEEQsALG1gYcE5FEbXp1hW8DAcAAQBQAnNfkbKQ/Pc/I9SGjgAwABAXAGdyJu7gpKxj9M5ERP/eAA6MAAIBgopwP8Sbdqzl4CjgAAQMwABAAAHALgpUcLtczR+Yjocj/eBgACC0YACtjKAXenmNmgRczT4AAIF4AAgDgAEASJqHnyzxQfCXUdO3gAAgoAACBgaSADVwoxVTFA7X0vaZsnexAU7CW/gAAgvAAQoABAXFGq3qUtmUv9VYp8AACCaEAIA7Bmj1M+lA7...
and this continues for about a hundred or more lines, and the device crashes if the pipeline isn't stopped. There should be only a few more characters after the first comma. Can someone tell me why this would happen and provide a solution?

Extracting I-Frames from H264 in MPEG-TS in C

I am experimenting with video and would like to know how I can extract I-frames from H.264 contained in an MPEG-TS container.
What I want to do is generate preview images out of a video stream.
As the I-frame is supposed to be a complete picture from which P- and B-frames derive, is there a possibility to just extract the picture data without having to decode it using a codec?
I have already done some work with the MPEG-TS container format, but I am not that specialized in codecs.
I am rather in search of information.
Thanks a lot.
I am no expert in this domain, but I believe the answer to your question is no.
If you want to save the I-frame as a JPEG image, you still need to "transcode" the video frame, i.e. you first need to decode the I-frame using an H264 decoder and then encode it using a JPEG encoder. This is because the JPEG encoder does not understand H264 frames; it only accepts uncompressed video frames as input.
As an aside, since the input to the JPEG encoder is an uncompressed frame, you can generate a JPEG image from any type of frame (I/P/B), as it will already have been decoded (using the reference I-frame, if needed) before being fed to the encoder.
As others have noted, decoding h.264 is complicated. You could write your own decoder, but it is a major effort. Why not use an existing decoder?
Intel's IPP library has the basic building blocks for a decoder, plus a sample decoder:
Code Samples for the Intel® Integrated Performance Primitives
There's libavcodec:
Using libavformat and libavcodec
Revised avcodec_sample.0.4.9.cpp
I am not an expert in this domain either, but I've played with decoding. Use this gstreamer pipeline to extract a preview from video.mp4:
gst-launch -v filesrc location=./video.mp4 ! qtdemux name=demux demux.video_00 ! ffdec_h264 ! videorate ! 'video/x-raw-yuv,framerate=1/1' ! jpegenc ! multifilesink location=image-%05d.jpeg
If you want to write some code, replace videorate with appsrc/appsink elements and write a control program for the two pipelines (see example):
filesrc location=./video.mp4 ! qtdemux name=demux demux.video_00 ! ffdec_h264 ! appsink
appsrc ! 'video/x-raw-yuv,framerate=1/1' ! jpegenc ! multifilesink location=image-%05d.jpeg
Buffers without the GST_BUFFER_FLAG_DELTA_UNIT flag set are I-frames. You can safely skip many frames and start decoding the stream at any I-frame.
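A minimal sketch of that flag check on the appsink side, using the GStreamer 1.0 appsink API rather than the 0.10-era pipelines above (the callback name is a placeholder):

#include <gst/gst.h>
#include <gst/app/gstappsink.h>

/* "new-sample" callback: pull the buffer and check whether it is an
 * I-frame, i.e. the DELTA_UNIT flag is not set. */
static GstFlowReturn on_new_sample (GstAppSink *sink, gpointer user_data)
{
    GstSample *sample = gst_app_sink_pull_sample (sink);
    GstBuffer *buffer = gst_sample_get_buffer (sample);

    if (!GST_BUFFER_FLAG_IS_SET (buffer, GST_BUFFER_FLAG_DELTA_UNIT)) {
        /* I-frame: push it into the second pipeline / JPEG encoder here. */
    }

    gst_sample_unref (sample);
    return GST_FLOW_OK;
}

Hook it up with g_object_set (appsink, "emit-signals", TRUE, NULL) and g_signal_connect (appsink, "new-sample", G_CALLBACK (on_new_sample), NULL).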
