How do I communicate with the encoder in GStreamer - C

I've just started learning GStreamer and I'm having some trouble communicating with the x264 encoder.
I'm trying to stream my screen from one device to another using the x264 encoder. However, my network is pretty unstable, so sometimes I miss an IDR frame, which means essentially no video for at least 2 s until the next IDR frame is received.
I'm trying to implement a way for my device to ask GStreamer to generate a new IDR frame as soon as one is lost.
Here is a brief summary of the code so far:
#include <gst/gst.h>
int main(int argc, char* argv[])
{
GstElement* pipeline;
GstElement* videosource, * videoconverter, * videoencoder, * videosink;
GstBus* bus;
GstMessage* msg;
GstStateChangeReturn ret;
gst_init(&argc, &argv);
videosource = gst_element_factory_make("dxgiscreencapsrc", "video_source");
videoconverter = gst_element_factory_make("autovideoconvert", "video_converter");
videoencoder = gst_element_factory_make("x264enc", "video_encoder");
videosink = gst_element_factory_make("udpsink", "video_sink");
// create the pipeline
pipeline = gst_pipeline_new("video_pipeline");
if (!pipeline || !videosource || !videoconverter || !videoencoder || !videosink) {
g_printerr("Not all elements could be created.\n");
gst_object_unref(pipeline);
return -1;
}
// build pipeline
gst_bin_add_many(GST_BIN(pipeline), videosource, videoconverter, videoencoder, videosink,
NULL);
// link elements
if (gst_element_link_many(videosource, videoconverter, videoencoder, videosink, NULL) != TRUE) {
g_printerr("Elements could not be linked.\n");
gst_object_unref(pipeline);
return -1;
}
// modify properties
g_object_set(videoencoder,
"threads", 8,
"quantizer", 21, // quantizer must be set to where bitrate is around desired value
"bitrate", 4800, // kbit/s
"vbv-buf-capacity", 5000, // the more it is the higher the bitrate fluctuations. set to around bitrate or bitrate/framerate for more control
"tune", 0x4, // zerolatency
"speed-preset", 2, // superfast
"key-int-max", 120, // ideally twice the framerate
NULL);
g_object_set(videosink,
"host", "REDACTED",
"port", 1234,
NULL);
// start playing
ret = gst_element_set_state(pipeline, GST_STATE_PLAYING);
if (ret == GST_STATE_CHANGE_FAILURE) {
g_printerr("Unable to set the pipeline to the playing state.\n");
gst_object_unref(pipeline);
return -1;
}
... read bus and cleanup afterwards (same as those in gstreamer tutorials)
How do I force the next frame to be an IDR frame? Is there any way to directly access the underlying encoder parameters to set the X264_TYPE_IDR flag?
Thanks for your time!
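One way to do this (a sketch, assuming GStreamer 1.x with the gstreamer-video library; this is not from the original post): x264enc honors the standard force-key-unit event, so the application can send one whenever the receiver reports a lost IDR frame. Sending the upstream event to the pipeline delivers it to the sink, from where it travels upstream until the encoder handles it:
#include <gst/video/video.h>
/* Ask the encoder for a new keyframe/IDR as soon as possible.
 * request_keyframe is a made-up helper name; the event itself is the
 * standard GstForceKeyUnit event from gstreamer-video. */
static void request_keyframe(GstElement *pipeline)
{
    GstEvent *event = gst_video_event_new_upstream_force_key_unit(
        GST_CLOCK_TIME_NONE, /* running-time: as soon as possible */
        TRUE,                /* all-headers: resend SPS/PPS with the IDR */
        0);                  /* count: request counter, 0 is fine here */
    gst_element_send_event(pipeline, event);
}
This avoids touching x264's internals entirely; there is no need to set X264_TYPE_IDR yourself.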

Related

Cannot move camera using libuvc

I'm trying to control my camera using libuvc.
I tried this code I modified from the example:
#include <libuvc/libuvc.h>
#include <stdio.h>
#include <unistd.h>
int main() {
uvc_context_t *ctx;
uvc_device_t *dev;
uvc_device_handle_t *devh;
uvc_stream_ctrl_t ctrl;
uvc_error_t res;
/* Initialize a UVC service context. Libuvc will set up its own libusb
* context. Replace NULL with a libusb_context pointer to run libuvc
* from an existing libusb context. */
res = uvc_init(&ctx, NULL);
if (res < 0) {
uvc_perror(res, "uvc_init");
return res;
}
puts("UVC initialized");
/* Locates the first attached UVC device, stores in dev */
res = uvc_find_device(
ctx, &dev,
0, 0, NULL); /* filter devices: vendor_id, product_id, "serial_num" */
if (res < 0) {
uvc_perror(res, "uvc_find_device"); /* no devices found */
} else {
puts("Device found");
/* Try to open the device: requires exclusive access */
res = uvc_open(dev, &devh);
if (res < 0) {
uvc_perror(res, "uvc_open"); /* unable to open device */
} else {
puts("Device opened");
uvc_print_diag(devh, stderr);
//uvc_set_pantilt_abs(devh, 100, 100);
int result = uvc_set_pantilt_abs(devh, 5, 50);
printf("%d\n", result);
//sleep(5);
/* Release our handle on the device */
uvc_close(devh);
puts("Device closed");
}
/* Release the device descriptor */
uvc_unref_device(dev);
}
/* Close the UVC context. This closes and cleans up any existing device handles,
* and it closes the libusb context if one was not provided. */
uvc_exit(ctx);
puts("UVC exited");
return 0;
}
I tried both uvc_set_pantilt_abs and uvc_set_pantilt_rel, and both return 0, which should mean the action succeeded. Except the camera does not move.
I'm sure the camera uses UVC because uvc_print_diag indicates
VideoControl:
bcdUVC: 0x0110
Am I doing something wrong? If not how can I troubleshoot it?
I found the answer a while ago but forgot to put it here.
I stumbled upon this project, which controls a camera from a command-line tool using libuvc.
After playing with it a bit and comparing it with my code, I saw what I did wrong: he was reading the pan/tilt data from the camera first and then using it to send requests. It seems cameras expect values that are multiples of the "step" the camera reports as its movement unit.
Here's the part where he requests the pantilt information:
int32_t pan;
int32_t panStep;
int32_t panMin;
int32_t panMax;
int32_t tilt;
int32_t tiltStep;
int32_t tiltMin;
int32_t tiltMax;
// get current value
errorCode = uvc_get_pantilt_abs(devh, &pan, &tilt, UVC_GET_CUR);
handleError(errorCode, "Failed to read pan/tilt settings - possibly unsupported by this camera?\n");
// get steps
errorCode = uvc_get_pantilt_abs(devh, &panStep, &tiltStep, UVC_GET_RES);
handleError(errorCode, "Failed to read pan/tilt settings - possibly unsupported by this camera?\n");
// get min
errorCode = uvc_get_pantilt_abs(devh, &panMin, &tiltMin, UVC_GET_MIN);
handleError(errorCode, "Failed to read pan/tilt settings - possibly unsupported by this camera?\n");
// get max
errorCode = uvc_get_pantilt_abs(devh, &panMax, &tiltMax, UVC_GET_MAX);
handleError(errorCode, "Failed to read pan/tilt settings - possibly unsupported by this camera?\n");
Here's the full code
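Putting his approach together with the calls above, a minimal sketch (my own, untested): snap the requested position to a multiple of the reported step and clamp it to the camera's range before calling uvc_set_pantilt_abs. snap_to_step is a hypothetical helper, not part of libuvc:
/* Round a target value to a multiple of the camera's step, clamped to
 * [min, max]. Hypothetical helper based on the observation above. */
static int32_t snap_to_step(int32_t value, int32_t step, int32_t min, int32_t max) {
    if (step > 0)
        value = (value / step) * step; /* round toward zero to a step multiple */
    if (value < min) value = min;
    if (value > max) value = max;
    return value;
}
/* Usage, after reading the current values, steps and limits as above: */
int32_t newPan  = snap_to_step(pan  + panStep,  panStep,  panMin,  panMax);
int32_t newTilt = snap_to_step(tilt + tiltStep, tiltStep, tiltMin, tiltMax);
uvc_set_pantilt_abs(devh, newPan, newTilt);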

How to get h264 frames via gstreamer

I'm familiar with ffmpeg, but not with GStreamer. I know how to get an H.264 frame through ffmpeg, for example through AVPacket, but I don't know how to get a frame of H.264 using GStreamer. I don't intend to save the H.264 data directly as a local file because I need to do other processing. Can anyone give me some sample code? I'll be very grateful. Here's what I learned from other people's code.
#include <stdio.h>
#include <string.h>
#include <fstream>
#include <unistd.h>
#include <gst/gst.h>
#include <gst/app/gstappsrc.h>
typedef struct {
GstPipeline *pipeline;
GstAppSrc *src;
GstElement *filter1;
GstElement *encoder;
GstElement *filter2;
GstElement *parser;
GstElement *qtmux;
GstElement *sink;
GstClockTime timestamp;
guint sourceid;
} gst_app_t;
static gst_app_t gst_app;
int main()
{
gst_app_t *app = &gst_app;
GstStateChangeReturn state_ret;
gst_init(NULL, NULL); //Initialize Gstreamer
app->timestamp = 0; //Set timestamp to 0
//Create pipeline, and pipeline elements
app->pipeline = (GstPipeline*)gst_pipeline_new("mypipeline");
app->src = (GstAppSrc*)gst_element_factory_make("appsrc", "mysrc");
app->filter1 = gst_element_factory_make ("capsfilter", "myfilter1");
app->encoder = gst_element_factory_make ("omxh264enc", "myomx");
app->filter2 = gst_element_factory_make ("capsfilter", "myfilter2");
app->parser = gst_element_factory_make("h264parse" , "myparser");
app->qtmux = gst_element_factory_make("qtmux" , "mymux");
app->sink = gst_element_factory_make ("filesink" , NULL);
if( !app->pipeline ||
!app->src || !app->filter1 ||
!app->encoder || !app->filter2 ||
!app->parser || !app->qtmux ||
!app->sink ) {
printf("Error creating pipeline elements!\n");
exit(2);
}
//Attach elements to pipeline
gst_bin_add_many(
GST_BIN(app->pipeline),
(GstElement*)app->src,
app->filter1,
app->encoder,
app->filter2,
app->parser,
app->qtmux,
app->sink,
NULL);
//Set pipeline element attributes
g_object_set (app->src, "format", GST_FORMAT_TIME, NULL);
GstCaps *filtercaps1 = gst_caps_new_simple ("video/x-raw",
"format", G_TYPE_STRING, "I420",
"width", G_TYPE_INT, 1280,
"height", G_TYPE_INT, 720,
"framerate", GST_TYPE_FRACTION, 1, 1,
NULL);
g_object_set (G_OBJECT (app->filter1), "caps", filtercaps1, NULL);
GstCaps *filtercaps2 = gst_caps_new_simple ("video/x-h264",
"stream-format", G_TYPE_STRING, "byte-stream",
NULL);
g_object_set (G_OBJECT (app->filter2), "caps", filtercaps2, NULL);
g_object_set (G_OBJECT (app->sink), "location", "output.h264", NULL);
//Link elements together
g_assert( gst_element_link_many(
(GstElement*)app->src,
app->filter1,
app->encoder,
app->filter2,
app->parser,
app->qtmux,
app->sink,
NULL ) );
//Play the pipeline
state_ret = gst_element_set_state((GstElement*)app->pipeline, GST_STATE_PLAYING);
g_assert(state_ret == GST_STATE_CHANGE_ASYNC);
//Get a pointer to the test input
FILE *testfile = fopen("test.yuv", "rb");
g_assert(testfile != NULL);
//Push the data from buffer to gstpipeline 100 times
for(int i = 0; i < 100; i++) {
char* filebuffer = (char*)malloc (1382400); //Allocate memory for framebuffer
if (filebuffer == NULL) {printf("Memory error\n"); exit (2);} //Errorcheck
size_t bytesread = fread(filebuffer, 1 , (1382400), testfile); //Read to filebuffer
//printf("File Read: %zu bytes\n", bytesread);
GstBuffer *pushbuffer; //Actual databuffer
GstFlowReturn ret; //Return value
pushbuffer = gst_buffer_new_wrapped (filebuffer, 1382400); //Wrap the data
//Set frame timestamp
GST_BUFFER_PTS (pushbuffer) = app->timestamp;
GST_BUFFER_DTS (pushbuffer) = app->timestamp;
GST_BUFFER_DURATION (pushbuffer) = gst_util_uint64_scale_int (1, GST_SECOND, 1);
app->timestamp += GST_BUFFER_DURATION (pushbuffer);
//printf("Frame is at %lu\n", app->timestamp);
ret = gst_app_src_push_buffer( app->src, pushbuffer); //Push data into pipeline
g_assert(ret == GST_FLOW_OK);
}
usleep(100000);
//Declare end of stream
gst_app_src_end_of_stream (GST_APP_SRC (app->src));
printf("End Program.\n");
return 0;
}
Here is a link to the source of the code
link
Your example serves the purpose of feeding data from the application into GStreamer, in the hope of encoding it as H.264 and writing the result to a file.
What you need (I am guessing here) is to read data from a file - let's say movie.mp4 - and get the H.264 data into your application(?)
I believe you have two options:
1. Use appsink instead of filesink, and feed the data from the file using filesrc. If you also need other processing besides grabbing the H.264 frames (like playing, or sending via network), you will have to use tee to split the pipeline into two output branches, as in the example gst-launch below. One branch of the output pipeline goes, for example, to a windowed output - autovideosink - and the other goes to your application.
To demonstrate this split, and still show you what is really happening, I will use the debugging element identity, which is able to dump the data that goes through it.
This way you will learn to use this handy tool for experiments and for verifying that you know what you are doing. This is not the solution you need.
gst-launch-1.0 -q filesrc location= movie.mp4 ! qtdemux name=qt ! video/x-h264 ! h264parse ! tee name=t t. ! queue ! avdec_h264 ! videoconvert ! autovideosink t. ! queue ! identity dump=1 ! fakesink sync=true
This pipeline plays the video in a window (autovideosink), while the other branch of the tee goes to the debugging element called identity, which dumps the frames hexdump-style (with addresses, character representation and everything).
So what you see on the stdout of gst-launch are actual H.264 frames (but you do not see boundaries or anything - it's just a raw dump).
To understand the gst-launch syntax (mainly the aliases with name=), check this part of the documentation.
In real code you would not use identity and fakesink; instead you would link appsink there and connect the appsink signals to callbacks in your C source code.
There are nice examples for this; I will not attempt to give you a complete solution. This example demonstrates how to get samples out of appsink.
The important bits are:
/* The appsink has received a buffer */
static GstFlowReturn new_sample (GstElement *sink, CustomData *data) {
GstSample *sample;
/* Retrieve the buffer */
g_signal_emit_by_name (sink, "pull-sample", &sample);
if (sample) {
/* The only thing we do in this example is print a * to indicate a received buffer */
g_print ("*");
gst_sample_unref (sample);
return GST_FLOW_OK;
}
return GST_FLOW_ERROR;
}
// somewhere in main()
// construction and linkage of elements
g_signal_connect (data.app_sink, "new-sample", G_CALLBACK (new_sample), &data);
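To actually get at the encoded bytes instead of just printing a star, map the sample's buffer inside the callback - a minimal sketch using standard GStreamer mapping calls (this assumes the appsink sits after h264parse, so each sample carries H.264 data):
/* Inside new_sample(), after the sample is pulled: */
GstBuffer *buffer = gst_sample_get_buffer (sample);
GstMapInfo map;
if (buffer && gst_buffer_map (buffer, &map, GST_MAP_READ)) {
    /* map.data and map.size expose the encoded bytes of this sample */
    /* ... hand them to your own processing here ... */
    gst_buffer_unmap (buffer, &map);
}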
2. The second solution is to use a pad probe registered for buffers only. A pad probe is a way to register a callback on any pad of any element in the pipeline and to tell GStreamer what information you are interested in on that probe. You can ask it to call the callback upon every event, or any downstream event, or on any buffer going through that probe. In the callback the pad probe calls, you extract the buffer and the actual data in that buffer.
Again, there are many examples of how to use pad probes.
One very nice example containing the logic of almost exactly what you need can be found here.
The important bits:
static GstPadProbeReturn
cb_have_data (GstPad *pad,
GstPadProbeInfo *info,
gpointer user_data)
{
// ... the code for writing the buffer data somewhere ..
}
// ... later in main()
pad = gst_element_get_static_pad (src, "src");
gst_pad_add_probe (pad, GST_PAD_PROBE_TYPE_BUFFER,
(GstPadProbeCallback) cb_have_data, NULL, NULL);
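For completeness, a minimal sketch of what the callback body might do (my own filling-in, not the linked example's code): take the buffer from the probe info and map it, the same way as with appsink:
static GstPadProbeReturn
cb_have_data (GstPad *pad,
              GstPadProbeInfo *info,
              gpointer user_data)
{
  GstBuffer *buffer = GST_PAD_PROBE_INFO_BUFFER (info);
  GstMapInfo map;
  if (buffer && gst_buffer_map (buffer, &map, GST_MAP_READ)) {
    /* map.data / map.size hold the data of the buffer passing this pad */
    gst_buffer_unmap (buffer, &map);
  }
  return GST_PAD_PROBE_OK; /* let the buffer continue downstream */
}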

Binary Semaphore to synchronize an Interrupt with a task in FreeRTOS

Hello everyone, I'm taking my first steps with an RTOS. I'm trying to receive data over UART in interrupt mode. I have a Display Task where the commands are written to a global buffer, and I just created a UART Handler Task where I want to read the bytes. The problems I'm facing are:
The semaphore I use inside the UART Task is unknown there, even though I declared it globally in the main file, so the xSemaphoreTake() call has errors. Maybe a helpful note: the UART Task is in a separate file.
Is my implementation of HAL_UART_RxCpltCallback and the UART Task clean?
Here is the code I wrote:
SemaphoreHandle_t uartInterruptSemaphore = NULL;
int main(void)
{
/* USER CODE BEGIN 1 */
void mainTask(void* param) {
uartInterruptSemaphore = xSemaphoreCreateBinary();
if(uartInterruptSemaphore != NULL) {
// Display Thread with a 2 priority
xTaskCreate(&displayTask, "Display Thread", 1000, &huart4, 2, NULL);
// deferred Interrupt to be synchronized with the Display Task, must have a higher priority than the display task
xTaskCreate(&UartHandlerTask, "UART Handler Task", 1000, &huart4, 3, NULL);
}
for(;;){
}
}
the callback function i wrote:
void HAL_UART_RxCpltCallback(UART_HandleTypeDef *uart_cb) {
BaseType_t xHigherPriorityTaskWoken = pdFALSE;
if(uart_cb->Instance == USART4) {
xSemaphoreGiveFromISR(uartInterruptSemaphore, &xHigherPriorityTaskWoken);
}
portEND_SWITCHING_ISR(xHigherPriorityTaskWoken);
}
and the handler task:
void UartHandlerTask(void* param) {
huart_cache = param;
const uint8_t tmp = rx_byte; //rx byte is global volatile variable
for(;;){
if(xSemaphoreTake(uartInterruptSemaphore, portMAX_DELAY) == pdPASS) {
HAL_UART_Receive_IT((UART_HandleTypeDef *)huart_cache, (uint8_t *)&rx_byte, 1);
// write data to the buffer
RX_interrupt(tmp);
}
}
}
I would recommend getting a better handle on C before trying to use an RTOS. This link also shows a better way of unblocking a task from an interrupt than using a binary semaphore: https://www.freertos.org/2020/09/decrease-ram-footprint-and-accelerate-execution-with-freertos-notifications.html
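For reference, a minimal sketch of that notification-based approach, adapted from the code above (untested; rx_byte, RX_interrupt and the UART handle come from the original post). It also sidesteps the "unknown semaphore" problem - which, by the way, is normally fixed by adding an extern SemaphoreHandle_t uartInterruptSemaphore; declaration in the file that uses it - because the only shared object is a task handle owned by the task's own file:
/* In the UART task's file */
static TaskHandle_t uartTaskHandle = NULL;
void HAL_UART_RxCpltCallback(UART_HandleTypeDef *uart_cb) {
    BaseType_t xHigherPriorityTaskWoken = pdFALSE;
    if (uart_cb->Instance == USART4 && uartTaskHandle != NULL) {
        vTaskNotifyGiveFromISR(uartTaskHandle, &xHigherPriorityTaskWoken);
    }
    portEND_SWITCHING_ISR(xHigherPriorityTaskWoken);
}
void UartHandlerTask(void* param) {
    UART_HandleTypeDef *huart = (UART_HandleTypeDef *)param;
    uartTaskHandle = xTaskGetCurrentTaskHandle();
    HAL_UART_Receive_IT(huart, (uint8_t *)&rx_byte, 1); // arm the first reception
    for (;;) {
        if (ulTaskNotifyTake(pdTRUE, portMAX_DELAY) > 0) {
            RX_interrupt(rx_byte);                              // consume the received byte
            HAL_UART_Receive_IT(huart, (uint8_t *)&rx_byte, 1); // re-arm reception
        }
    }
}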

gstreamer 1.14.5 multiple rtspsrc element pipeline, reconnect individual streams when disconnected via 'C' code

Hello GStreamer community & fans,
I have a working pipeline that connects to multiple H.264 IP camera streams using multiple rtspsrc elements aggregated into a single pipeline for downstream video processing.
Intermittently and randomly, streams coming in over remote and slower connections will have problems, time out, retry, and go dead, leaving that stream as a black image when viewing the streams after processing. The other working streams continue to process normally. The rtspsrc elements are set up to retry the RTSP connection, and that somewhat works, but for the cases where it doesn't, I'm looking for a way to disconnect the stream entirely from the rtspsrc element and restart that particular stream without disrupting the other streams.
I haven't found any obvious examples or ways to accomplish this, so I've been tinkering with the rtspsrc element code itself, using this public function to access the rtspsrc internals that handle connecting.
__attribute__ ((visibility ("default"))) GstRTSPResult my_gst_rtspsrc_conninfo_reconnect(GstRTSPSrc *, gboolean);
GstRTSPResult
my_gst_rtspsrc_conninfo_reconnect(GstRTSPSrc *src, gboolean async)
{
int retries = 0, port = 0;
char portrange_buff[32];
// gboolean manual_http;
GST_ELEMENT_WARNING(src, RESOURCE, READ, (NULL),
(">>>>>>>>>> Streamer: A camera closed the streaming connection. Trying to reconnect"));
gst_rtspsrc_set_state (src, GST_STATE_PAUSED);
gst_rtspsrc_set_state (src, GST_STATE_READY);
gst_rtspsrc_flush(src, TRUE, FALSE);
// manual_http = src->conninfo.connection->manual_http;
// src->conninfo.connection->manual_http = TRUE;
gst_rtsp_connection_set_http_mode(src->conninfo.connection, TRUE);
if (gst_rtsp_conninfo_close(src, &src->conninfo, TRUE) == GST_RTSP_OK)
{
memset(portrange_buff, 0, sizeof(portrange_buff));
g_object_get(G_OBJECT(src), "port-range", portrange_buff, NULL);
for (retries = 0; portrange_buff[retries] && isdigit(portrange_buff[retries]); retries++)
port = (port * 10) + (portrange_buff[retries] - '0');
if (port != src->client_port_range.min)
GST_ELEMENT_WARNING(src, RESOURCE, READ, (NULL), (">>>>>>>>>> Streamer: port range start mismatch"));
GST_WARNING_OBJECT(src, ">>>>>>>>>> Streamer: old port.min: %d, old port.max: %d, old port-range: %s\n", (src->client_port_range.min), (src->client_port_range.max), (portrange_buff));
src->client_port_range.min += 6;
src->client_port_range.max += 6;
src->next_port_num = src->client_port_range.min;
memset(portrange_buff, 0, sizeof(portrange_buff));
sprintf(portrange_buff, "%d-%d", src->client_port_range.min, src->client_port_range.max);
g_object_set(G_OBJECT(src), "port-range", portrange_buff, NULL);
for (retries = 0; retries < 5 && gst_rtsp_conninfo_connect(src, &src->conninfo, async) != GST_RTSP_OK; retries++)
sleep(10);
}
if (retries < 5)
{
gst_rtspsrc_set_state(src, GST_STATE_PAUSED);
gst_rtspsrc_set_state(src, GST_STATE_PLAYING);
return GST_RTSP_OK;
}
else return GST_RTSP_ERROR;
}
I realize this is probably not best practice; I'm doing it as a learning experience, to find a better way once I understand the internals better.
I appreciate any feedback anyone has to this problem.
-Doug
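Not from the thread, but a sketch of a less invasive approach that is often suggested (untested against this particular setup): instead of patching rtspsrc internals, drop only the failed rtspsrc to NULL and bring it back up, leaving the rest of the pipeline running:
/* Restart a single rtspsrc branch without disturbing the other streams.
 * Assumes src is the rtspsrc element inside a still-PLAYING pipeline. */
static void restart_stream (GstElement *src)
{
    gst_element_set_state (src, GST_STATE_NULL);  /* tears down the RTSP session */
    gst_element_sync_state_with_parent (src);     /* rejoins the pipeline's state */
}
Since rtspsrc has dynamic pads, going through NULL removes them and re-emits pad-added on the way back up, so the pad-added handler must be able to relink the restarted branch.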

Capture video from camera in Mac OS X

How can I filter the video stream from the camera in Mac OS X? I wrote a QuickTime Sequence Grabber channel component, but it only works if the app uses the SG API. If the app uses QTKit Capture, the component does not work.
Does anybody know how I can implement this?
You could use OpenCV for video processing; it's a cross-platform image/video processing library: http://opencv.willowgarage.com
Your code would look something like this:
CvCapture* capture = NULL;
if ((capture = cvCaptureFromCAM(-1)) == NULL)
{
std::cerr << "!!! ERROR: vCaptureFromCAM No camera found\n";
return -1;
}
cvNamedWindow("webcam", CV_WINDOW_AUTOSIZE);
cvMoveWindow("webcam", 50, 50);
cvQueryFrame(capture);
IplImage* src = NULL;
for (;;)
{
if ((src = cvQueryFrame(capture)) == NULL)
{
std::cerr << "!!! ERROR: vQueryFrame\n";
break;
}
// perform processing on src->imageData
cvShowImage("webcam", src);
char key_pressed = cvWaitKey(2);
if (key_pressed == 27)
break;
}
cvReleaseCapture(&capture);
I had success using OpenCV on Mac OS X by calling cvCaptureFromCAM(0) instead of passing it -1. On Linux, -1 seems to work OK.
Don't forget the cvReleaseCapture(&capture); at the end to free the camera.