Reconnect RTSP stream in Gstreamer pipeline - c

I have a working GStreamer pipeline using RTSP input streams, which are handled by the uridecodebin element.
My goal is to reconnect to the RTSP input streams when the internet connection is unstable.
When the internet connection is down for only a few seconds and then comes back up, the pipeline starts receiving frames again and everything is OK. When the connection is down for more than 20 seconds, I get GST_MESSAGE_EOS. I looked for a timeout variable in every element generated by uridecodebin, but I did not find one. Do you have any hint which element has this timeout variable and how to set it?
If it is not possible to set such a timeout variable, is there any way to block GST_MESSAGE_EOS? When I receive GST_MESSAGE_EOS on the bus, I try to remove the uridecodebin from the pipeline and create a new one, but that does not work once GST_MESSAGE_EOS has been received. (Removing the uridecodebin and creating a new one during the normal state works.)

I found a way to block GST_MESSAGE_EOS.
Create the following pad probe callback to drop GST_EVENT_EOS:
GstPadProbeReturn eos_probe_cb(GstPad *pad, GstPadProbeInfo *info, gpointer u_data)
{
    if (GST_EVENT_TYPE(GST_PAD_PROBE_INFO_DATA(info)) == GST_EVENT_EOS)
    {
        return GST_PAD_PROBE_DROP;
    }
    return GST_PAD_PROBE_OK;
}
Then attach it as a probe to a suitable GstPad of one of your elements:
gst_pad_add_probe(src_pad, GST_PAD_PROBE_TYPE_EVENT_DOWNSTREAM, eos_probe_cb, (gpointer) user_data, NULL);
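With EOS blocked this way, the reconnect itself can be done the way the question describes: tear down the old uridecodebin and add a fresh one. Below is a minimal sketch, assuming a pipeline handle, the original rtsp_uri, and an existing pad-added handler (on_pad_added is a placeholder name); you would typically call it from a bus watch on GST_MESSAGE_ERROR:
static void restart_source(GstElement *pipeline, GstElement **dec, const gchar *rtsp_uri)
{
    /* Drop the stale source element; gst_bin_remove() unrefs it. */
    gst_element_set_state(*dec, GST_STATE_NULL);
    gst_bin_remove(GST_BIN(pipeline), *dec);

    /* Create and add a fresh uridecodebin for the same URI.
       on_pad_added is your existing pad-added handler (placeholder name). */
    *dec = gst_element_factory_make("uridecodebin", NULL);
    g_object_set(*dec, "uri", rtsp_uri, NULL);
    g_signal_connect(*dec, "pad-added", G_CALLBACK(on_pad_added), pipeline);
    gst_bin_add(GST_BIN(pipeline), *dec);
    gst_element_sync_state_with_parent(*dec);
}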

Related

Get a GstBuffer/GstMemory/GstSample from an element in running pipeline

I need to randomly access image data in a running pipeline, something like gst_base_sink_get_last_sample() but not for a sink element placed at the end of the pipeline. I need to inspect data passing through the middle of the pipeline while it is running, for example to inspect the input buffer to glupload. I also cannot add a tee to fork the stream into a fakesink, as tee has the overhead of a data copy for each frame. Is there any method I can use to extract the current buffer/memory/sample of data from a PLAYING element in a GStreamer pipeline?
Thanks to @FlorianZwoch, I made it work using a pad probe.
Every time I need access to the data, I add a probe to whichever pad I want, like this:
GstPad* pad = gst_element_get_static_pad(v4l2convert, "src");
gst_pad_add_probe(pad, GST_PAD_PROBE_TYPE_BUFFER, probe_callback, NULL, NULL);
gst_object_unref(pad);
and then in the callback function I can access data using these lines of code:
GstPadProbeReturn Gstreamer::probe_callback(
    GstPad* pad, GstPadProbeInfo* info, gpointer user_data)
{
    Q_UNUSED(user_data);
    gst_println("callback called");
    GstBuffer* buffer = gst_pad_probe_info_get_buffer(info); // No need to unref later [see docs]
    GstCaps* caps = gst_pad_get_current_caps(pad);
    if (!buffer || !caps)
    {
        g_printerr("Probe callback failed to fetch valid data.\n");
        goto label_return;
    }
    // Consume data here

label_return:
    if (caps)
        gst_caps_unref(caps); // gst_pad_get_current_caps() returns a new reference
    return GST_PAD_PROBE_REMOVE;
}
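For the "consume data" part, a common pattern (my sketch, not part of the original answer) is to map the buffer read-only and inspect the raw bytes; caps describes their format:
GstMapInfo map;
if (gst_buffer_map(buffer, &map, GST_MAP_READ))
{
    /* map.data points at map.size raw bytes of the frame. */
    gst_println("got %" G_GSIZE_FORMAT " bytes, caps: %" GST_PTR_FORMAT, map.size, caps);
    gst_buffer_unmap(buffer, &map);
}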

Gstreamer restart EOS element

I'd like to loop a file using GStreamer.
Gst.Element playbin = Gst.ElementFactory.make ("uridecodebin", null);
I do this by adding a probe to the playbin's src pad and listening for EOS events. Whenever one comes, I repeat the stream by seeking back to the beginning.
Gst.Pad srcpad = playbin.get_request_pad("src_%u");
srcpad.add_probe(Gst.PadProbeType.EVENT_DOWNSTREAM, (pad, info) => {
    Gst.Event? event = info.get_event();
    if (event != null)
    {
        if (event.type == Gst.EventType.EOS
            || event.type == Gst.EventType.SEGMENT_DONE)
        {
            var element = pad.get_parent_element();
            element.seek(1.0, Gst.Format.TIME, Gst.SeekFlags.SEGMENT,
                         Gst.SeekType.SET, 0, Gst.SeekType.NONE, 0);
            return Gst.PadProbeReturn.HANDLED;
        }
    }
    return Gst.PadProbeReturn.OK;
});
However, when I catch the EOS and seek back to the beginning, I get this error:
wavparse gstwavparse.c:2195:gst_wavparse_stream_data:<wavparse0> Error pushing on srcpad wavparse0:src, reason eos, is linked? = 1
How do I get my playbin element back out of the EOS state so that it can play from the place I seeked to?
I'd like to avoid listening to the pipeline bus because it's quite a complex application and the playbin is quite a few Bins deep.
Admittedly, my testing was performed with Python rather than C, but I see no reason why the logic should be different.
Set the player's pipeline state to Gst.State.NULL, wait a tenth of a second or so for the pipeline to reach that state, then simply play it again as if starting from scratch after loading it, normally by setting the pipeline state back to Gst.State.PLAYING.
Note:
As far as I'm aware, only local files can be pre-rolled, i.e. seek first and then play; if the file is not local you must play first and then seek. Always wait for the pipeline to reach the desired state before performing the next operation.
To pre-roll a local file, set the pipeline state to paused, then perform the seek, and finally set it to playing, always waiting for the previous operation to finish before moving on to the next.
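In C, that restart-from-scratch flow might look like the following sketch (pipeline is an assumed handle to your top-level pipeline; gst_element_get_state() is used to wait for each transition to complete):
/* Restart after EOS as if starting from scratch. */
gst_element_set_state(pipeline, GST_STATE_NULL);
gst_element_get_state(pipeline, NULL, NULL, GST_CLOCK_TIME_NONE); /* wait for NULL */
gst_element_set_state(pipeline, GST_STATE_PLAYING);

/* Pre-rolling a local file instead: pause, seek, then play. */
gst_element_set_state(pipeline, GST_STATE_PAUSED);
gst_element_get_state(pipeline, NULL, NULL, GST_CLOCK_TIME_NONE); /* wait for PAUSED */
gst_element_seek_simple(pipeline, GST_FORMAT_TIME,
                        GST_SEEK_FLAG_FLUSH | GST_SEEK_FLAG_KEY_UNIT, 0);
gst_element_set_state(pipeline, GST_STATE_PLAYING);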

CANOPEN SYNC timeout after enable Operation

I am a newbie with CANopen. I wrote a program that reads the actual position via PDO1 (the default mapping is statusword + actual position).
void canopen_init() {
    // code 1: set up PDO mapping
    nmtPreOperation();
    disablePDO(PDO_TX1_CONFIG_COMM);
    setTransmissionTypePDO(PDO_TX1_CONFIG_COMM, 1);
    setInhibitTimePDO(PDO_TX1_CONFIG_COMM, 0);
    setEventTimePDO(PDO_TX1_CONFIG_COMM, 0);
    enablePDO(PDO_TX1_CONFIG_COMM);
    setCyclePeriod(1000);
    setSyncWindow(100);
    // code 2: enable operation
    readyToSwitchOn();
    switchOn();
    enableOperation();
    motionStart();
    // code 3
    nmtActiveNode();
}

int main(void) {
    canopen_init();
    while (1) {
        delay_ms(1);
        send_sync();
    }
}
If I remove "code 2" (the servo stays in the Switch On Disabled state), I can read the position each time a SYNC is sent. But if I include "code 2", the drive reports a "sync frame timeout" error. I don't know whether the drive or my code has the problem. Does my code have a problem? Thank you!
I don't know what protocol stack this is or how it works, but these:
setCyclePeriod(1000);
setSyncWindow(100);
likely correspond to these OD entries:
Object 1006h: Communication cycle period (CiA 301 7.5.2.6)
Object 1007h: Synchronous window length (CiA 301 7.5.2.7)
They set the SYNC interval and time window for synchronous PDOs respectively. The latter is described by the standard as:
If the synchronous window length expires all synchronous TPDOs may be discarded and an EMCY message may be transmitted; all synchronous RPDOs may be discarded until the next SYNC message is received. Synchronous RPDO processing is resumed with the next SYNC message.
Now if you set this sync window to 100 µs but have a sloppy busy-wait delay like delay_ms(1), that doesn't add up. If you write zero to Object 1007h, you disable the sync window feature; I suppose setSyncWindow(0); might do that. Try that to see if it's the issue. If so, you should drop your busy-wait in favour of proper hardware timers, one for the SYNC period and one for the PDO timeout (if you must use that feature).
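As a sketch of that suggestion, reusing the question's own stack calls (whose exact semantics are an assumption on my part) and a hypothetical 1 ms hardware-timer ISR in place of the busy-wait:
void canopen_init(void)
{
    /* ... PDO mapping and NMT setup as in the question ... */
    setCyclePeriod(1000);   /* object 1006h: SYNC period, presumably in microseconds */
    setSyncWindow(0);       /* object 1007h: 0 disables the sync window feature      */
    /* ... drive state machine, nmtActiveNode() ... */
}

/* Hypothetical hardware-timer interrupt firing every 1 ms,
   replacing the delay_ms(1) busy-wait loop in main(). */
void timer_1ms_isr(void)
{
    send_sync();
}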
Problem fixed. Due to heavy EMI from the servo, my controller didn't work properly. After isolating it, everything worked very well :)!

Streaming OGG Flac to Icecast with libshout

I have a simple streamer developed in C++ using libFLAC and libshout to stream to an Icecast server.
The FLAC encoder is created in the following way:
m_encoder = FLAC__stream_encoder_new();
FLAC__stream_encoder_set_channels(m_encoder, 2);
FLAC__stream_encoder_set_ogg_serial_number(m_encoder, rand());
FLAC__stream_encoder_set_bits_per_sample(m_encoder, 16);
FLAC__stream_encoder_set_sample_rate(m_encoder, in_samplerate);
FLAC__stream_encoder_init_ogg_stream(m_encoder, NULL, writeByteArray, NULL, NULL, NULL, this);
The writeByteArray function sends the encoded data to Icecast using the shout_send_raw function from libshout.
shout_send_raw returns the actual number of bytes sent, so I assume it works as it should; no error occurs.
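For reference, a write callback of this kind might look roughly like the sketch below. This is my guess at the shape of writeByteArray, not the poster's code; the Streamer type and m_ShoutData member are placeholders, and <FLAC/stream_encoder.h> plus <shout/shout.h> are assumed to be included:
static FLAC__StreamEncoderWriteStatus writeByteArray(
    const FLAC__StreamEncoder *encoder, const FLAC__byte buffer[],
    size_t bytes, unsigned samples, unsigned current_frame, void *client_data)
{
    /* client_data is the "this" pointer passed to init_ogg_stream();
       Streamer/m_ShoutData are placeholder names. */
    Streamer *self = (Streamer *)client_data;
    ssize_t sent = shout_send_raw(self->m_ShoutData, buffer, bytes);
    return (sent == (ssize_t)bytes)
        ? FLAC__STREAM_ENCODER_WRITE_STATUS_OK
        : FLAC__STREAM_ENCODER_WRITE_STATUS_FATAL_ERROR;
}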
The problem is that the Icecast server does not stream the data I send. I see the following in the log:
[2018-02-15 15:31:47] DBUG stats/modify_node_event update "/radio" total_bytes_read (20735897)
[2018-02-15 15:31:47] DBUG stats/modify_node_event update "/radio" total_bytes_sent (0)
I see that Icecast receives the data, but it does not send it to connected clients. The mount point is /radio, and when I try to connect to that mount using any media player, nothing happens: no playback.
So my question is how is that possible that Icecast receives the data but does not send it to connected clients?
Maybe some additional libshout configuration is required, here is how I configure it:
shout_set_format( m_ShoutData, SHOUT_FORMAT_OGG_AUDIO );
shout_set_mime( m_ShoutData, "application/ogg" );
Any help would be appreciated.
To sum up the solution from the comments:
FLAC has a significantly higher bitrate than any other commonly used audio codec, so the default settings will NOT work. The queue size must be increased significantly so that complete data frames fit into it; otherwise Icecast will not sync on the stream and will refuse to send data out to clients.
The same obviously applies to streaming video. The queue size must be adjusted either for the appropriate mountpoints or globally.
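In icecast.xml that is the queue-size limit, which can be set globally under <limits> or per mount; a sketch with an illustrative value (the stock default is 524288 bytes):
<limits>
    <!-- Raise the default 524288 so complete FLAC frames fit; tune to your bitrate. -->
    <queue-size>4194304</queue-size>
</limits>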

Video packet capture over multiple IP cameras

We are working on a C application, a simple RTSP/RTP client that records video from a number of Axis cameras. We launch a pthread for each camera, which establishes the RTP session and records the packets captured using the recvfrom() call.
A single camera single pthread records fine for well over a day without issues.
But when testing with more cameras, about 25 (so 25 pthreads), recording to file goes fine for 15 to 20 minutes and then just stops. The application keeps running. We have been trying varied implementations for over a month and a half, but nothing seems to help. Please provide suggestions.
We are using the CentOS 5 platform.
Define "record" Does that mean write data to a file? How do you control access to the file?
You can't have several threads all trying to write at the exact same time. So the comment by Alon seems to be pertinent. Your write access control machanism has problems.
void *IPThread(void *ptr)
{
    // Establish RTSP session
    // Bind to RTP ports (video)
    // Increase socket buffer size to 625KB
    record_fd = open(record_name, O_CREAT|O_RDWR|O_TRUNC, 0777);
    while (1)
    {
        if (poll(RTP/RTCP ports))   // a timeout value of 1
        {
            if (RTCP event)
                RTCPhandler();
            if (RTP event)
            {
                recvfrom();         // the normal socket API recvfrom
                WritePacketToFile(record_fd)
                {
                    // Create new record_fd after 100MB
                }
            }
        }
    }
}
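For reference, the "increase socket buffer size" step in this pseudocode is usually done per socket roughly like the sketch below (sock is a placeholder descriptor; on Linux the kernel caps the value at net.core.rmem_max, so that sysctl may need raising too):
#include <stdio.h>
#include <sys/socket.h>

static void grow_rcvbuf(int sock)  /* sock: the RTP socket (placeholder) */
{
    int rcvbuf = 625 * 1024;       /* ~625KB, as in the pseudocode above */
    if (setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof(rcvbuf)) < 0)
        perror("setsockopt(SO_RCVBUF)");
}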
Even if it is all right to stick with the single-threaded implementation, why does the multithreaded approach behave this way (no recording after ~15 minutes)?
