GStreamer qtmux/mp4mux command-to-code conversion - C

I'm using qtmux to merge audio and video into an MP4 container file with GStreamer. My pipeline looks like:
gst-launch-1.0 autovideosrc ! x264enc ! queue ! qtmux0. autoaudiosrc ! wavenc ! queue ! qtmux ! filesink location=file.mp4
videotestsrc --> x264enc -----\
>---> qtmux ---> filesink
audiotestsrc --> wavenc ------/
It works fine from the command line, but I want to write it in C. I'm stuck on this part:
x264enc -----\
>---> qtmux
wavenc ------/
This is my code for this part:
gst_element_link_many(audiosource, wavenc, audioqueue, NULL);
gst_element_link_many(videosource, x264enc, videoqueue, NULL);
gst_element_link_many(qtmux, filesink, NULL);
audio_pad = gst_element_get_request_pad (audioqueue, "src");
mux_audio_pad = gst_element_get_static_pad (qtmux, "sink_1");
gst_pad_link (audio_pad, mux_audio_pad); /* ERROR HERE */
video_pad = gst_element_get_request_pad (videoqueue, "src");
mux_video_pad = gst_element_get_static_pad(qtmux, "sink_2");
gst_pad_link (video_pad, mux_video_pad); /* ERROR HERE */
But the pad-linking step fails. The error is GST_PAD_LINK_NOFORMAT (-4): pads do not have a common format.
How can I fix it?

I think you have switched the request/static pad calls here: the queue has static pads, while the muxer has request pads.
You can also make your life easier by using the gst_parse_launch() function to create the pipeline just as you do on the command line, saving a lot of error-prone code.
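For illustration, a minimal sketch of the corrected linking, using the element variables from the question (qtmux exposes request pad templates named "audio_%u" and "video_%u", so those names are assumed here):

/* queue src pads are static; the muxer sink pads are requested */
GstPad *audio_src = gst_element_get_static_pad (audioqueue, "src");
GstPad *video_src = gst_element_get_static_pad (videoqueue, "src");
GstPad *mux_audio = gst_element_get_request_pad (qtmux, "audio_%u");
GstPad *mux_video = gst_element_get_request_pad (qtmux, "video_%u");

if (gst_pad_link (audio_src, mux_audio) != GST_PAD_LINK_OK ||
    gst_pad_link (video_src, mux_video) != GST_PAD_LINK_OK)
  g_printerr ("Failed to link the queues to qtmux\n");

gst_object_unref (audio_src);
gst_object_unref (video_src);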

Related

Convert gstreamer command to C code . . request pad

I'm developing on Jetson Nano, and my goal is to output two overlapping videos.
Here is the command I wrote first.
gst-launch-1.0 v4l2src device=/dev/video0 io-mode=2 ! image/jpeg, width=1920, height=1080 ! nvjpegdec ! video/x-raw ! queue ! nvvidconv ! 'video/x-raw(memory:NVMM),format=RGBA' ! queue ! comp.sink_0 \
v4l2src device=/dev/video1 io-mode=2 ! image/jpeg, width=1920, height=1080 ! nvjpegdec ! video/x-raw ! queue ! nvvidconv ! 'video/x-raw(memory:NVMM),format=RGBA' ! queue ! comp.sink_1 \
alsasrc device="hw:1" ! audioconvert ! audioresample ! audiorate ! "audio/x-raw, rate=48000, channels=2" ! queue ! faac bitrate=128000 rate-control=2 ! queue ! muxer. \
nvcompositor name=comp sink_0::xpos=0 sink_0::ypos=0 sink_0::width=768 sink_0::height=432 sink_0::zorder=2 sink_0::alpha=1.0 sink_1::xpos=0 sink_1::ypos=0 sink_1::width=1920 sink_1::height=1080 sink_1::zorder=1 sink_1::alpha=1.0 ! \
'video/x-raw(memory:NVMM),format=RGBA' ! nvvidconv ! 'video/x-raw(memory:NVMM),format=(string)I420' ! nvv4l2h265enc ! h265parse ! queue ! mpegtsmux name=muxer ! filesink location=test.mp4 \
This is my command, and this is my C code:
gst_bin_add_many (GST_BIN (gstContext->pipeline),
gstContext->videosrc, gstContext->video_filter1, gstContext->jpegdec, gstContext->x_raw, gstContext->queue_video1, gstContext->video_convert2, gstContext->video_filter4, gstContext->queue_video2 ,
gstContext->videosrc2, gstContext->video_filter2_1, gstContext->jpegdec2, gstContext->x_raw2, gstContext->queue_video2_1, gstContext->video_convert3, gstContext->video_filter2_3, gstContext->queue_video2_2 ,
gstContext->video_mixer , gstContext->video_filter3, gstContext->video_convert, gstContext->video_filter2, gstContext->video_encoder, gstContext->video_pasre, gstContext->queue_video3, gstContext->muxer,gstContext->sink2, NULL);
if(
!gst_element_link_many(gstContext->video_mixer, gstContext->video_filter3, gstContext->video_convert, gstContext->video_filter2, gstContext->video_encoder, gstContext->video_pasre, gstContext-> queue_video3, gstContext->muxer, NULL)
|| !gst_element_link_many(gstContext->videosrc, gstContext->video_filter1, gstContext->jpegdec, gstContext->x_raw, gstContext->queue_video1, gstContext->video_convert2, gstContext->video_filter4, gstContext->queue_video2 ,NULL)
|| !gst_element_link_many(gstContext->videosrc2, gstContext->video_filter2_1, gstContext->jpegdec2, gstContext->x_raw2, gstContext->queue_video2_1, gstContext->video_convert3, gstContext->video_filter2_3, gstContext->queue_video2_2 , NULL)
)
{
g_error("Failed to link elementsses!#!#!#!#");
pthread_mutex_unlock(&context->lock);
return -2;
}
queue_video2 = gst_element_get_static_pad (gstContext->queue_video2, "src");
queue_video2_2 = gst_element_get_static_pad (gstContext->queue_video2_2, "src");
mixer2_sinkpad = gst_element_get_request_pad (gstContext->video_mixer, "sink_%u");
mixer1_sinkpad = gst_element_get_request_pad (gstContext->video_mixer, "sink_%u");
if (gst_pad_link (queue_video2, mixer2_sinkpad) != GST_PAD_LINK_OK ||
gst_pad_link (queue_video2_2, mixer1_sinkpad) != GST_PAD_LINK_OK) {
g_printerr ("\n\n\n source0 and mixer pads could not be linked22222222222.\n\n\n");
gst_object_unref (gstContext->pipeline);
return -1;
}
g_object_unref(queue_video2);
g_object_unref(queue_video2_2);
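For reference, the per-pad properties from the command line (sink_0::xpos, sink_0::width and so on) are set in C directly on the request pads. A sketch using the two pads requested above, assuming nvcompositor accepts the same property names it does on the command line:

/* values mirror the gst-launch command above */
g_object_set (mixer2_sinkpad, "xpos", 0, "ypos", 0, "width", 768, "height", 432, NULL);
g_object_set (mixer1_sinkpad, "xpos", 0, "ypos", 0, "width", 1920, "height", 1080, NULL);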
And the log:
0:00:00.926466171 968 0x7f54324050 FIXME videodecoder gstvideodecoder.c:933:gst_video_decoder_drain_out:<nvjpegdec0> Sub-class should implement drain()
0:00:00.979159196 968 0x7f54324050 WARN basesrc gstbasesrc.c:3055:gst_base_src_loop:<videosrc1> error: Internal data stream error.
0:00:00.979208884 968 0x7f54324050 WARN basesrc gstbasesrc.c:3055:gst_base_src_loop:<videosrc1> error: streaming stopped, reason not-linked (-1)
0:00:00.979354772 968 0x7f54324050 WARN queue gstqueue.c:988:gst_queue_handle_sink_event:<queue_video1> error: Internal data stream error.
0:00:00.979391699 968 0x7f54324050 WARN queue gstqueue.c:988:gst_queue_handle_sink_event:<queue_video1> error: streaming stopped, reason not-linked (-1)
ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:videosrc1: Internal data stream error.
Additional debug info:
gstbasesrc.c(3055): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:videosrc1:
I connected the elements using request pads, but why am I getting this error?
Please let me know if I've done anything wrong.
The command-line pipeline works as soon as you run it.
Thank you.

Modify data in a stream using GStreamer

I've been looking for a way to modify a GStreamer stream in order to display the quality (for example, by showing the raw video next to the result of subtracting the encoded/decoded stream from the raw video).
I've been learning GStreamer for a week, and so far I was able to do what I was asked, but right now I'm stuck.
I looked into the compositor element, which seems to mix streams, but I'm pretty sure it cannot do what I need.
Then I checked appsrc and appsink in some code. I tried to build a pipeline: filesrc - appsink - appsrc - filesink. But for obvious reasons it did not work. I browsed GitHub projects, but most appsrc/appsink uses were just to do a task programmatically, like reading a file.
Lastly, I found someone with the same problem as me. He "solved" it by creating two pipelines (filesrc - appsink & appsrc - filesink), but he still got allocation errors. I was not even able to run the code he shared.
Does anyone have any idea on how to get it done in a clean way?
I found a way to modify a stream.
It basically was to create my own plugin, because within my own element I can modify the stream between its sink pad and its source pad.
If someone is interested, there is documentation explaining how to create a plugin, and here's my chain function where I modify the data between the pads:
static GstFlowReturn
gst_my_filter_chain (GstPad * pad, GstObject * parent, GstBuffer * buf)
{
  GstMyFilter *filter = GST_MYFILTER (parent);

  if (filter->silent == FALSE) {
    GstMapInfo info;

    if (gst_buffer_map (buf, &info, GST_MAP_READWRITE)) {
      gsize i;

      for (i = 0; i < info.size; i++) {
        guint8 *pos = info.data + i;
        guint8 c = 100;            /* Value added to make the color lighter */
        int cc = (int) (*pos + c); /* Adding the value to the color */

        *pos = (cc > 255) ? 255 : (guint8) cc; /* Clamp so the value stays valid */
      }
      gst_buffer_unmap (buf, &info); /* Only unmap if the map succeeded */
    }
  }
  return gst_pad_push (filter->srcpad, buf);
}
It's quite rudimentary and only lightens the video's colors, but I can modify a stream, so it's only a matter of time before I get what I want done.
Just hope it'll help someone.
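For context, a chain function like this is attached to the element's sink pad in the instance init function. A rough sketch following the plugin-writer's guide (the template variables sink_template/src_template and the init function name are assumptions):

static void
gst_my_filter_init (GstMyFilter * filter)
{
  /* sink pad: incoming buffers are routed through the chain function above */
  filter->sinkpad = gst_pad_new_from_static_template (&sink_template, "sink");
  gst_pad_set_chain_function (filter->sinkpad,
      GST_DEBUG_FUNCPTR (gst_my_filter_chain));
  gst_element_add_pad (GST_ELEMENT (filter), filter->sinkpad);

  /* src pad: where the modified buffers are pushed out */
  filter->srcpad = gst_pad_new_from_static_template (&src_template, "src");
  gst_element_add_pad (GST_ELEMENT (filter), filter->srcpad);

  filter->silent = FALSE;
}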
Compositor may be the simplest solution for what you're trying to do. You would have to use tee to fork into two sub-pipelines, one for the direct video and another one going through encoding/decoding (here using JPEG), and compose both:
gst-launch-1.0 videotestsrc ! video/x-raw,width=640,height=480,framerate=30/1 ! tee name=t t. ! queue ! comp.sink_0 t. ! queue ! jpegenc ! jpegdec ! comp.sink_1 compositor name=comp sink_0::xpos=0 sink_1::xpos=640 ! autovideosink
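If you go this route from C, the same description string can be handed to gst_parse_launch(); a minimal sketch (error handling kept short):

#include <gst/gst.h>

int main (int argc, char *argv[])
{
  GError *error = NULL;
  gst_init (&argc, &argv);

  /* Same pipeline description as the gst-launch command above */
  GstElement *pipeline = gst_parse_launch (
      "videotestsrc ! video/x-raw,width=640,height=480,framerate=30/1 ! tee name=t "
      "t. ! queue ! comp.sink_0 "
      "t. ! queue ! jpegenc ! jpegdec ! comp.sink_1 "
      "compositor name=comp sink_0::xpos=0 sink_1::xpos=640 ! autovideosink", &error);
  if (pipeline == NULL) {
    g_printerr ("Parse error: %s\n", error->message);
    return 1;
  }

  gst_element_set_state (pipeline, GST_STATE_PLAYING);
  g_main_loop_run (g_main_loop_new (NULL, FALSE)); /* run until interrupted */
  return 0;
}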

How to get h264 frames via gstreamer

I'm familiar with ffmpeg, but not with GStreamer. I know how to get an H.264 frame through ffmpeg; for example, I can get an H.264 frame through an AVPacket. But I don't know how to get an H.264 frame with GStreamer. I don't intend to save the H.264 data directly to a local file because I need to do other processing. Can anyone give me some sample code? I'd be very grateful. Here's what I learned from other people's code.
#include <stdio.h>
#include <string.h>
#include <fstream>
#include <unistd.h>
#include <gst/gst.h>
#include <gst/app/gstappsrc.h>
typedef struct {
GstPipeline *pipeline;
GstAppSrc *src;
GstElement *filter1;
GstElement *encoder;
GstElement *filter2;
GstElement *parser;
GstElement *qtmux;
GstElement *sink;
GstClockTime timestamp;
guint sourceid;
} gst_app_t;
static gst_app_t gst_app;
int main()
{
gst_app_t *app = &gst_app;
GstStateChangeReturn state_ret;
gst_init(NULL, NULL); //Initialize Gstreamer
app->timestamp = 0; //Set timestamp to 0
//Create pipeline, and pipeline elements
app->pipeline = (GstPipeline*)gst_pipeline_new("mypipeline");
app->src = (GstAppSrc*)gst_element_factory_make("appsrc", "mysrc");
app->filter1 = gst_element_factory_make ("capsfilter", "myfilter1");
app->encoder = gst_element_factory_make ("omxh264enc", "myomx");
app->filter2 = gst_element_factory_make ("capsfilter", "myfilter2");
app->parser = gst_element_factory_make("h264parse" , "myparser");
app->qtmux = gst_element_factory_make("qtmux" , "mymux");
app->sink = gst_element_factory_make ("filesink" , NULL);
if( !app->pipeline ||
!app->src || !app->filter1 ||
!app->encoder || !app->filter2 ||
!app->parser || !app->qtmux ||
!app->sink ) {
printf("Error creating pipeline elements!\n");
exit(2);
}
//Attach elements to pipeline
gst_bin_add_many(
GST_BIN(app->pipeline),
(GstElement*)app->src,
app->filter1,
app->encoder,
app->filter2,
app->parser,
app->qtmux,
app->sink,
NULL);
//Set pipeline element attributes
g_object_set (app->src, "format", GST_FORMAT_TIME, NULL);
GstCaps *filtercaps1 = gst_caps_new_simple ("video/x-raw",
"format", G_TYPE_STRING, "I420",
"width", G_TYPE_INT, 1280,
"height", G_TYPE_INT, 720,
"framerate", GST_TYPE_FRACTION, 1, 1,
NULL);
g_object_set (G_OBJECT (app->filter1), "caps", filtercaps1, NULL);
GstCaps *filtercaps2 = gst_caps_new_simple ("video/x-h264",
"stream-format", G_TYPE_STRING, "byte-stream",
NULL);
g_object_set (G_OBJECT (app->filter2), "caps", filtercaps2, NULL);
g_object_set (G_OBJECT (app->sink), "location", "output.h264", NULL);
//Link elements together
g_assert( gst_element_link_many(
(GstElement*)app->src,
app->filter1,
app->encoder,
app->filter2,
app->parser,
app->qtmux,
app->sink,
NULL ) );
//Play the pipeline
state_ret = gst_element_set_state((GstElement*)app->pipeline, GST_STATE_PLAYING);
g_assert(state_ret == GST_STATE_CHANGE_ASYNC);
//Get a pointer to the test input
FILE *testfile = fopen("test.yuv", "rb");
g_assert(testfile != NULL);
//Push the data from buffer to gstpipeline 100 times
for(int i = 0; i < 100; i++) {
char* filebuffer = (char*)malloc (1382400); //Allocate memory for framebuffer
if (filebuffer == NULL) {printf("Memory error\n"); exit (2);} //Errorcheck
size_t bytesread = fread(filebuffer, 1 , (1382400), testfile); //Read to filebuffer
//printf("File Read: %zu bytes\n", bytesread);
GstBuffer *pushbuffer; //Actual databuffer
GstFlowReturn ret; //Return value
pushbuffer = gst_buffer_new_wrapped (filebuffer, 1382400); //Wrap the data
//Set frame timestamp
GST_BUFFER_PTS (pushbuffer) = app->timestamp;
GST_BUFFER_DTS (pushbuffer) = app->timestamp;
GST_BUFFER_DURATION (pushbuffer) = gst_util_uint64_scale_int (1, GST_SECOND, 1);
app->timestamp += GST_BUFFER_DURATION (pushbuffer);
//printf("Frame is at %lu\n", app->timestamp);
ret = gst_app_src_push_buffer( app->src, pushbuffer); //Push data into pipeline
g_assert(ret == GST_FLOW_OK);
}
usleep(100000);
//Declare end of stream
gst_app_src_end_of_stream (GST_APP_SRC (app->src));
printf("End Program.\n");
return 0;
}
Here is a link to the source of the code
link
Your example serves the purpose of feeding data from the application into GStreamer, in the hope of encoding it to H.264 and writing the result to a file.
What you need (I am guessing here) is to read data from a file, let's say movie.mp4, and get the H.264 data into your application.
I believe you have two options:
1. Use appsink instead of filesink and feed the data from the file using filesrc. If you also need other processing besides grabbing the H.264 frames (like playing or sending over the network), you would have to use tee to split the pipeline into two output branches, as in the example gst-launch below. One branch of the output pipeline would go to, for example, a windowed output (autovideosink), and the other would go to your application.
To demonstrate this split and still show you what is really happening, I will use the debugging element identity, which can dump the data that goes through it.
This way you will learn to use this handy tool for experiments and for verifying that you know what you are doing. This is not the final solution you need.
gst-launch-1.0 -q filesrc location=movie.mp4 ! qtdemux name=qt ! video/x-h264 ! h264parse ! tee name=t t. ! queue ! avdec_h264 ! videoconvert ! autovideosink t. ! queue ! identity dump=1 ! fakesink sync=true
This pipeline plays the video in a window (autovideosink), while the other branch of the tee goes to the debugging element called identity, which dumps each frame hexdump-style (with addresses, character representation and everything).
So what you see on the stdout of gst-launch are actual H.264 frames (but you do not see boundaries or anything; it's just a really raw dump).
To understand the gst-launch syntax (mainly the aliases with name=), check this part of the documentation.
In real code you would not use identity and fakesink; instead you would link an appsink there and connect the appsink's signals to callbacks in your C source code.
There are nice examples for this, so I will not attempt to give you a complete solution. This example demonstrates how to get samples out of appsink.
The important bits are:
/* The appsink has received a buffer */
static GstFlowReturn new_sample (GstElement *sink, CustomData *data) {
GstSample *sample;
/* Retrieve the buffer */
g_signal_emit_by_name (sink, "pull-sample", &sample);
if (sample) {
/* The only thing we do in this example is print a * to indicate a received buffer */
g_print ("*");
gst_sample_unref (sample);
return GST_FLOW_OK;
}
return GST_FLOW_ERROR;
}
// somewhere in main()
// construction and linkage of elements
g_signal_connect (data.app_sink, "new-sample", G_CALLBACK (new_sample), &data);
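One detail to keep in mind: appsink only emits "new-sample" if signal emission is enabled on it, e.g.:

/* enable the "new-sample" signal on the appsink (off by default) */
g_object_set (data.app_sink, "emit-signals", TRUE, NULL);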
2. The second solution is to use a pad probe registered for buffers only. A pad probe is a way to register a callback on any pad of any element in the pipeline and tell GStreamer what information you are interested in on that probe. You can ask it to call the callback on every event, on any downstream event, or on any buffer going through that probe. In the callback the pad probe invokes, you extract the buffer and the actual data in that buffer.
Again, there are many examples of how to use pad probes.
One very nice example containing almost exactly the logic you need can be found here.
The important bits:
static GstPadProbeReturn
cb_have_data (GstPad *pad,
GstPadProbeInfo *info,
gpointer user_data)
{
// ... the code for writing the buffer data somewhere ..
}
// ... later in main()
pad = gst_element_get_static_pad (src, "src");
gst_pad_add_probe (pad, GST_PAD_PROBE_TYPE_BUFFER,
(GstPadProbeCallback) cb_have_data, NULL, NULL);
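The probe callback body is elided in the snippet above; as a rough sketch of one possibility (the function name is hypothetical), the buffer can be mapped to reach the raw encoded bytes:

static GstPadProbeReturn
dump_buffer_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstBuffer *buffer = gst_pad_probe_info_get_buffer (info);
  GstMapInfo map;

  if (gst_buffer_map (buffer, &map, GST_MAP_READ)) {
    /* map.data points at the H.264 bytes of this buffer */
    g_print ("buffer of %" G_GSIZE_FORMAT " bytes\n", map.size);
    gst_buffer_unmap (buffer, &map);
  }
  return GST_PAD_PROBE_OK;
}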

How to change brightness of video of a gstreamer pipeline dynamically

I am trying the code below to change the brightness of a video pipeline. I can see the video, but the brightness never changes, although I am trying to change it every 60 seconds. Any idea what I am missing?
static gboolean broadcasting_timeout_cb (gpointer user_data)
{
GstElement *vaapipostproc = NULL;
vaapipostproc = gst_bin_get_by_name(GST_BIN(broadcasting_pipeline),
"postproc");
if (vaapipostproc == NULL) {
fprintf(stderr, "unable to get vaapipostproc from broadcasting pipeline\n");
return TRUE;
}
g_object_set (G_OBJECT (vaapipostproc), "brightness", -1.0, NULL);
fprintf(stderr, "brightness changed by -1.0\n");
return TRUE;
}
main() {
//pipeline code goes here and then below code comes //
broadcasting_pipeline = gst_parse_launch (compl_streaming_pipe, &error);
if (!broadcasting_pipeline) {
fprintf (stderr, "Parse error: %s\n", error->message);
exit (1);
}
loop_broadcasting = g_main_loop_new (NULL, FALSE);
g_timeout_add_seconds (60, broadcasting_timeout_cb, loop_broadcasting);
gst_element_set_state (broadcasting_pipeline, GST_STATE_PLAYING);
g_main_loop_run(loop_broadcasting);
// rest of the code for main function comes here
}
It seems that vaapipostproc properties like brightness can't be changed dynamically at runtime!
However, I found that the videobalance element works, as suggested by Millind Deore. But videobalance causes CPU usage to be too high and becomes a bottleneck for the streaming pipeline.
So I tried glcolorbalance, which is the same as videobalance but uses the GPU for the brightness conversion.
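For reference, with such an element in the pipeline, the timeout callback can adjust its brightness property at runtime; a sketch assuming the pipeline description names the element "balance" (e.g. "videobalance name=balance"):

static gboolean change_brightness_cb (gpointer user_data)
{
  static gdouble brightness = 0.0;
  GstElement *balance =
      gst_bin_get_by_name (GST_BIN (broadcasting_pipeline), "balance");

  if (balance != NULL) {
    brightness = (brightness <= -1.0) ? 0.0 : brightness - 0.1; /* vary the value */
    g_object_set (balance, "brightness", brightness, NULL);
    gst_object_unref (balance);
  }
  return TRUE; /* keep the timeout running */
}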
Here is how my experiment goes:
If I use the pipeline below, I can broadcast to YouTube successfully:
gst-launch-1.0 filesrc location=Recorded_live_streaming_on__2018_01_20___13_56_33.219076__-0800.flv ! decodebin name=demux ! queue ! videorate ! video/x-raw,framerate=30/1 ! glupload ! glcolorconvert ! gldownload ! video/x-raw ! vaapih264enc dct8x8=true cabac=true rate-control=cbr bitrate=8192 keyframe-period=60 max-bframes=0 ! flvmux name=mux ! rtmpsink sync=true async=true location="rtmp://x.rtmp.youtube.com/XXXXX live=1" demux. ! queue ! progressreport ! audioconvert ! audiorate ! audioresample ! faac bitrate=128000 ! audio/mpeg,mpegversion=4,stream-format=raw ! mux.
error: XDG_RUNTIME_DIR not set in the environment.
libva info: VA-API version 0.39.0
libva info: va_getDriverName() returns 0
libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/i965_drv_video.so
libva info: Found init function __vaDriverInit_0_39
libva info: va_openDriver() returns 0
Setting pipeline to PAUSED ...
error: XDG_RUNTIME_DIR not set in the environment.
libva info: VA-API version 0.39.0
libva info: va_getDriverName() returns 0
libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/i965_drv_video.so
libva info: Found init function __vaDriverInit_0_39
libva info: va_openDriver() returns 0
Pipeline is PREROLLING ...
Got context from element 'vaapiencodeh264-0': gst.vaapi.Display=context, gst.vaapi.Display=(GstVaapiDisplay)"(GstVaapiDisplayGLX)\ vaapidisplayglx1";
Got context from element 'gldownloadelement0': gst.gl.GLDisplay=context, gst.gl.GLDisplay=(GstGLDisplay)"(GstGLDisplayX11)\ gldisplayx11-0";
Redistribute latency...
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
progressreport0 (00:00:05): 4 / 1984 seconds ( 0.2 %)
progressreport0 (00:00:10): 9 / 1984 seconds ( 0.5 %)
progressreport0 (00:00:15): 14 / 1984 seconds ( 0.7 %)
progressreport0 (00:00:20): 19 / 1984 seconds ( 1.0 %)
However, if I use glcolorbalance in this pipeline, it gives me the following error and I can no longer stream to YouTube:
gst-launch-1.0 filesrc location=Recorded_live_streaming_on__2018_01_20___13_56_33.219076__-0800.flv ! decodebin name=demux ! queue ! videorate ! video/x-raw,framerate=30/1 ! glupload ! glcolorbalance ! glcolorconvert ! gldownload ! video/x-raw ! vaapih264enc dct8x8=true cabac=true rate-control=cbr bitrate=8192 keyframe-period=60 max-bframes=0 ! flvmux name=mux ! rtmpsink sync=true async=true location="rtmp://x.rtmp.youtube.com/XXXXX live=1" demux. ! queue ! progressreport ! audioconvert ! audiorate ! audioresample ! faac bitrate=128000 ! audio/mpeg,mpegversion=4,stream-format=raw ! mux.
error: XDG_RUNTIME_DIR not set in the environment.
libva info: VA-API version 0.39.0
libva info: va_getDriverName() returns 0
libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/i965_drv_video.so
libva info: Found init function __vaDriverInit_0_39
libva info: va_openDriver() returns 0
Setting pipeline to PAUSED ...
error: XDG_RUNTIME_DIR not set in the environment.
libva info: VA-API version 0.39.0
libva info: va_getDriverName() returns 0
libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/i965_drv_video.so
libva info: Found init function __vaDriverInit_0_39
libva info: va_openDriver() returns 0
Pipeline is PREROLLING ...
Got context from element 'vaapiencodeh264-0': gst.vaapi.Display=context, gst.vaapi.Display=(GstVaapiDisplay)"(GstVaapiDisplayGLX)\ vaapidisplayglx1";
Got context from element 'gldownloadelement0': gst.gl.GLDisplay=context, gst.gl.GLDisplay=(GstGLDisplay)"(GstGLDisplayX11)\ gldisplayx11-0";
Redistribute latency...
WARNING: from element /GstPipeline:pipeline0/GstDecodeBin:demux: Delayed linking failed.
Additional debug info:
./grammar.y(510): gst_parse_no_more_pads (): /GstPipeline:pipeline0/GstDecodeBin:demux:
failed delayed linking some pad of GstDecodeBin named demux to some pad of GstQueue named queue0
^Chandling interrupt.
Interrupt: Stopping pipeline ...
ERROR: pipeline doesn't want to preroll.
Setting pipeline to NULL ...
Freeing pipeline ...
It seems like glcolorbalance is causing decodebin's delayed linking to fail, since it is the only difference between the two pipelines above.
I am new to GStreamer. Can anyone tell me what is wrong with the second pipeline and why the linking is failing?

GStreamer element avdec_h264 fails to instantiate

I'm trying to build the following working pipeline in C code:
gst-launch-1.0 -v udpsrc port=5000 \
! application/x-rtp,payload=96,media=video,clock-rate=90000,encoding-name=H264,sprop-parameter-sets=\"J2QAH6wrQCIC3y8A8SJq\\,KO4CXLA\\=\" \
! rtph264depay ! avdec_h264 \
! videoconvert ! autovideosink sync=false
Following the tutorial I instantiated the needed elements and checked if they have been created:
GstElement *pipeline, *source, *depay, *decode, *videoconvert, *sink;
// Initialize GStreamer
gst_init (NULL, NULL);
// Create the elements
source = gst_element_factory_make("udpsrc", "source");
depay = gst_element_factory_make("rtph264depay", "depay");
decode = gst_element_factory_make("avdec_h264", "decode");
videoconvert = gst_element_factory_make("videoconvert", "videoconvert");
sink = gst_element_factory_make("autovideosink", "sink");
// Create the empty pipeline
pipeline = gst_pipeline_new ("pipeline");
//Check if all elements have been created
if (!pipeline || !source || !depay || !decode || !videoconvert || !sink) {
g_printerr ("Not all elements could be created.\n");
return -1;
}
The code compiles successfully; when executed, however, the output is "Not all elements could be created." Further testing showed that it is the decoder element that is not created.
Where is my mistake? Why is the decoder element not created?
I'm on OS X using gcc in the Eclipse environment. The include path and linker flags are set.
Update:
Running the code with GST_DEBUG=3, as suggested by max taldykin, outputs:
0:00:00.011208000 5624 0x7f863a411600 INFO GST_ELEMENT_FACTORY gstelementfactory.c:467:gst_element_factory_make: no such element factory "avdec_h264"!
0:00:00.011220000 5624 0x7f863a411600 INFO GST_ELEMENT_FACTORY gstelementfactory.c:467:gst_element_factory_make: no such element factory "videoconvert"!
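For what it's worth, the log says the "avdec_h264" and "videoconvert" factories are simply not registered (avdec_h264 ships with the gst-libav plugin set). A quick way to check for that from C is a sketch like this:

/* Returns TRUE if the named element factory is installed. */
static gboolean factory_available (const gchar *name)
{
  GstElementFactory *factory = gst_element_factory_find (name);
  if (factory == NULL)
    return FALSE;
  gst_object_unref (factory);
  return TRUE;
}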
