Video packet capture over multiple IP cameras - c

We are working on a C application, a simple RTSP/RTP client, to record video from a number of Axis cameras. We launch a pthread for each camera; it establishes the RTP session and begins recording the packets captured with the recvfrom() call.
A single camera with a single pthread records fine for well over a day without issues.
But when testing with more cameras, about 25 (so 25 pthreads), recording to file goes fine for 15 to 20 minutes and then simply stops. The application keeps running. We have spent over a month and a half trying varied implementations, but nothing seems to help. Please provide suggestions.
We are using the CentOS 5 platform.

Define "record" Does that mean write data to a file? How do you control access to the file?
You can't have several threads all trying to write at the exact same time. So the comment by Alon seems to be pertinent. Your write access control machanism has problems.

void *IPThread(void *ptr)
{
    // Establish RTSP session
    // Bind to RTP ports (video)
    // Increase socket receive buffer size to 625 KB
    record_fd = open(record_name, O_CREAT | O_RDWR | O_TRUNC, 0777);
    while (1)
    {
        if (poll(RTP/RTCP ports))   // a timeout value of 1
        {
            if (RTCP event)
                RTCPhandler();
            if (RTP event)
            {
                recvfrom();                    // the normal socket API recvfrom
                WritePacketToFile(record_fd);  // creates a new record_fd after 100 MB
            }
        }
    }
}
Even if it is all right to stick to the single-threaded implementation, why is the multithreaded approach behaving this way (not recording after ~15 minutes)?
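If, as the comment above suspects, several threads end up sharing one output file, the writes would need to be serialized. A minimal sketch of that idea, assuming a shared descriptor (the names write_lock and write_packet_locked are illustrative, not from the original code):

#include <pthread.h>
#include <unistd.h>

/* Hypothetical shared state: only relevant if the threads really share one descriptor. */
static pthread_mutex_t write_lock = PTHREAD_MUTEX_INITIALIZER;

static ssize_t write_packet_locked(int fd, const void *buf, size_t len)
{
    ssize_t n;

    pthread_mutex_lock(&write_lock);    /* serialize concurrent writers */
    n = write(fd, buf, len);
    pthread_mutex_unlock(&write_lock);
    return n;
}

If each thread keeps its own record_fd, as the posted pseudocode suggests, no such lock is needed and the problem lies elsewhere.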

Related

Streaming OGG Flac to Icecast with libshout

I have a simple streamer developed in C++ using libFLAC and libshout to stream to an Icecast server.
The flac Encoder is created in the following way:
m_encoder = FLAC__stream_encoder_new();
FLAC__stream_encoder_set_channels(m_encoder, 2);
FLAC__stream_encoder_set_ogg_serial_number(m_encoder, rand());
FLAC__stream_encoder_set_bits_per_sample(m_encoder, 16);
FLAC__stream_encoder_set_sample_rate(m_encoder, in_samplerate);
FLAC__stream_encoder_init_ogg_stream(m_encoder, NULL, writeByteArray, NULL, NULL, NULL, this);
The function writeByteArray sends encoded data to Icecast using shout_send_raw function from libshout.
shout_send_raw returns the actual number of bytes sent, so I assume it works as it should; no error occurs.
The problem is that the Icecast server does not stream the data that I send. I see the following in the log:
[2018-02-15 15:31:47] DBUG stats/modify_node_event update "/radio" total_bytes_read (20735897)
[2018-02-15 15:31:47] DBUG stats/modify_node_event update "/radio" total_bytes_sent (0)
I see that Icecast receives the data, but it does not send it to connected clients. The mount point is radio and when I try to connect to that mount using any media player - it does nothing, no playback.
So my question is how is that possible that Icecast receives the data but does not send it to connected clients?
Maybe some additional libshout configuration is required, here is how I configure it:
shout_set_format( m_ShoutData, SHOUT_FORMAT_OGG_AUDIO );
shout_set_mime( m_ShoutData, "application/ogg" );
Any help would be appreciated.
To sum up the solution from the comments:
FLAC has a significantly higher bitrate than any other commonly used audio codec, so the default settings will NOT work. The queue size must be increased significantly so that complete data frames fit in it; otherwise Icecast will not sync on the stream and will refuse to send data out to clients.
The same obviously applies to streaming video. The queue size must be adjusted either for the appropriate mountpoints or globally.
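For reference, the queue size is set in icecast.xml, either globally under <limits> or per <mount>; the values below are only illustrative:

<limits>
    <!-- illustrative values: large enough that whole Ogg/FLAC pages fit -->
    <queue-size>2097152</queue-size>
    <burst-size>131072</burst-size>
</limits>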

Data lost during SPI communication with Polling mechanism

I am coding the communication between two DSPs through SPI. The starting code is quite simple: DSP-1 is sending and DSP-2 is receiving (of course, DSP-1 also receives, but I don't care about that so far, and vice versa for DSP-2).
That works fine. One thousand 16-bit values were sent and received correctly.
However, when I add a random delay on the DSP-1 (master) side, DSP-2 begins to lose some data. What confuses me is that I didn't change anything on the DSP-2 side for receiving, and I am polling quite often.
So, any idea why a delay on the sender's side might affect the receiver? (I double-checked that DSP-1 did send the correct sequence.)
I am also thinking of converting to an interrupt mechanism; would that solve this kind of issue altogether?
my DSP2's polling code is:
for(;;)                 // my main program for receiving
{
    spi_xmit(data);     // for sending, not of interest so far
    while (SpiaRegs.SPIFFRX.bit.RXFFST == 0) {}   // polling: wait until the RX FIFO has data
    while (SpiaRegs.SPIFFRX.bit.RXFFST != 0)      // drain the RX FIFO
    {
        rdata[seq] = SpiaRegs.SPIRXBUF;
        seq++;
    }
    if (seq > 1000) break;
}

winforms: Reading from serialport and plotting real time data. Many errors/bugs

I'm trying to acquire data from an MCU, save it to a file and plot it. The code functions properly for some time, then just hangs randomly (sometimes after 1 second, sometimes after 1 minute...!). Also, the serial port timeouts are not respected, i.e. I'm not receiving any timeout exceptions. I'm using an FTDI232RL chip. The only time I get a timeout exception is when I unplug it while the program is running.
Code:
private: System::Void START_Click(System::Object^ sender, System::EventArgs^ e) {
    seconds = 0;
    minutes = 0;
    hours = 0;
    days = 0;
    t = 0;
    if ((this->comboBox4->Text == String::Empty) || (this->textBox2->Text == String::Empty) || (this->textBox3->Text == String::Empty)) {
        this->textBox1->Text = "please select port, save file directory and logging interval";
        timer1->Enabled = false;
    }
    else { // start assigning
        w = Convert::ToDouble(this->textBox3->Text);
        double q = fmod(w * 1000, 10);
        if (q != 0) {
            MessageBox::Show("The logging interval must be a multiple of 0.01s");
        }
        else {
            period = static_cast<int>(w * 1000);
            this->interval->Interval = period;
            try { // first make sure port isn't busy/open
                if (!this->serialPort1->IsOpen) {
                    // select the port whose name is in comboBox4 (select port)
                    this->serialPort1->PortName = this->comboBox4->Text;
                    // open the port
                    this->serialPort1->Open();
                    this->serialPort1->ReadTimeout = period + 1;
                    this->serialPort1->WriteTimeout = period + 1;
                    String^ name_ = this->serialPort1->PortName;
                    START = gcnew String("S");
                    this->textBox1->Text = "Logging started";
                    timer1->Enabled = true;
                    interval->Enabled = true;
                    myStream = new ofstream(directory, ios::out);
                    *myStream << "time(ms);ADC1;ADC2;ADC3;ADC4;ADC5;ADC6;ADC7;ADC8;";
                    *myStream << endl;
                    chart1->Series["ADC1"]->Points->Clear();
                    chart1->Series["ADC2"]->Points->Clear();
                    chart1->Series["ADC3"]->Points->Clear();
                    chart1->Series["ADC4"]->Points->Clear();
                    chart1->Series["ADC5"]->Points->Clear();
                    chart1->Series["ADC6"]->Points->Clear();
                    chart1->Series["ADC7"]->Points->Clear();
                    chart1->Series["ADC8"]->Points->Clear();
                    backgroundWorker1->RunWorkerAsync();
                }
                else
                {
                    this->textBox1->Text = "Warning: port is busy or isn't open";
                    timer1->Enabled = false;
                    interval->Enabled = false;
                }
            }
            catch (UnauthorizedAccessException^)
            {
                this->textBox1->Text = "Unauthorized access";
                timer1->Enabled = false;
                interval->Enabled = false;
            }
        }
    }
}
private: System::Void backgroundWorker1_DoWork(System::Object^ sender, System::ComponentModel::DoWorkEventArgs^ e) {
    while (!backgroundWorker1->CancellationPending) {
        if (backgroundWorker1->CancellationPending) {
            e->Cancel = true;
            return;
        }
        t += period;
        if (t < 10 * period) {
            this->chart1->ChartAreas["ChartArea1"]->AxisX->Minimum = 0;
            this->chart1->ChartAreas["ChartArea1"]->AxisX->Maximum = t + 10 * period;
        }
        else {
            this->chart1->ChartAreas["ChartArea1"]->AxisX->Minimum = t - 10 * period;
            this->chart1->ChartAreas["ChartArea1"]->AxisX->Maximum = t + 10 * period;
        }
        *myStream << t << ";";
        for (int n = 0; n < 8; n++) {
            adc_array[n] = this->serialPort1->ReadByte();
        }
        Array::Copy(adc_array, ADC, 8);
        for (int f = 0; f < 8; f++) {
            *myStream << ADC[f] << ";";
        }
        *myStream << endl;
        backgroundWorker1->ReportProgress(t);
    }
}
private: System::Void backgroundWorker1_ProgressChanged(System::Object^ sender, System::ComponentModel::ProgressChangedEventArgs^ e) {
    chart1->Series["ADC1"]->Points->AddXY(t, ADC[0]);
    chart1->Series["ADC2"]->Points->AddXY(t, ADC[1]);
    chart1->Series["ADC3"]->Points->AddXY(t, ADC[2]);
    chart1->Series["ADC4"]->Points->AddXY(t, ADC[3]);
    chart1->Series["ADC5"]->Points->AddXY(t, ADC[4]);
    chart1->Series["ADC6"]->Points->AddXY(t, ADC[5]);
    chart1->Series["ADC7"]->Points->AddXY(t, ADC[6]);
    chart1->Series["ADC8"]->Points->AddXY(t, ADC[7]);
}
The user is allowed to define intervals in seconds for data acquisition (in the code this interval is w, after conversion to double). At each interval the program sends a pulse to the MCU requesting a new data transmission. So far I have been testing this with 1-second intervals (note: during each interval the MCU sends 8 frames, each representing one ADC). However, I need to get this to run at 10 ms intervals at some point. Will this be possible? Any idea on how to solve the few problems I mentioned at the beginning?
Thanks in advance
UPDATE
Just to give you an idea of what's happening:
I commented out the charting part and ran the program for about 5 minutes with a reading interval of 1 s. I therefore expected around 5x60 = 300 values in the output file, but I only got 39 (i.e. from 1 s up to 39 s). The program was still running, but the data were no longer being stored.
Testing was done in release mode, not debug mode. In debug mode, setting a breakpoint on serialPort->ReadByte() does not reproduce the problem. My guess is that it's a timing issue between the program and the MCU.
You are making several standard mistakes. First off, do NOT unplug the cable while the port is open. Many USB emulators don't know how to deal with that; the FTDI driver is particularly notorious about it. They just make the port disappear while it is in use, which invariably gives code that uses the port a severe heart attack. An uncatchable exception is common.
Secondly, you are accessing properties of a class that is not thread-safe from a worker thread. The Chart control was made to be used only on the UI thread; accessing the ChartAreas property in a worker is going to buy you a lot of misery. Getting an InvalidOperationException is pretty typical when you violate threading requirements, but it is not consistently implemented. Nastiness includes random AccessViolationExceptions, corrupted data and deadlock.
Third, you are setting completely unrealistic goals. Pursuing an update every 10 milliseconds is pointless; the human eye cannot perceive that. Anything past 50 milliseconds just turns into a blur. This is taken advantage of when you watch a movie in the cinema: it displays at 24 frames per second. The failure mode is unpleasant as well: you will eventually reach a point where you are pummeling the UI thread (or the Chart control) with more updates than it can process. The side effect is that the UI stops painting itself; it is too busy trying to keep up with the deluge of invoke requests. And the amount of memory your program consumes keeps building, because the update queue grows without bounds. That eventually ends with an OOM exception, though it takes a while to consume 2 gigabytes. You will need to prevent this from happening; you need to throttle the rate at which you invoke. A simple thread-safe counter can take care of that.
Fourth, you are accessing the data you gather from more than one thread without taking care of thread-safety. The ADC array content is being changed by the worker while the UI thread is reading it. That produces various amounts of misery, bad data at a minimum. A simple workaround is to pass a copy of the data to the ReportProgress method. In general, address these kinds of threading problems by using pull instead of push: get rid of the fire-hose problem by having the UI thread pace the requests instead of trying to make it keep up.

Want to Implement Timeout for one function in C

I have a function that is in listen mode: it listens for data coming from a device.
While the function is in listen mode I want to apply a timeout: if I do not get any response from the particular device, I want to exit the function and notify the caller.
If I do get a response from the device during the timeout period, I want to cancel the timeout and continue with the work, with no time limit on how long that work may take.
So how can I implement this for a function? Any help implementing this timeout functionality would be appreciated.
Depending on how you are waiting for a response from this device, the answer to your question will be different. The basic framework is:
int do_something_with_device()
{
    if (!wait_for_response_from_device()) {
        return TIMEOUT_ERROR;
    }
    // continue with processing
}
As for how you implement wait_for_response_from_device(), well, every device is different. If you're using sockets or pipes, use select(). If you're interfacing with something that requires a busy-wait loop, it might look like:
int wait_for_response_from_device()
{
    time_t start = time(NULL);
    while (time(NULL) - start < TIMEOUT) {
        if (check_device_ready()) {
            return 1;
        }
    }
    return 0;
}
Naturally, the implementation of check_device_ready() would be up to you.
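If the device is exposed as a file descriptor (socket, pipe, serial port), a select()-based version might look roughly like this; device_fd and TIMEOUT are assumptions standing in for your actual descriptor and timeout in seconds:

#include <sys/select.h>
#include <sys/time.h>

int wait_for_response_from_device(int device_fd)
{
    fd_set readfds;
    struct timeval tv;

    FD_ZERO(&readfds);
    FD_SET(device_fd, &readfds);
    tv.tv_sec = TIMEOUT;    /* whole seconds to wait */
    tv.tv_usec = 0;

    /* select() returns >0 if the fd is readable, 0 on timeout, -1 on error */
    return select(device_fd + 1, &readfds, NULL, NULL, &tv) > 0;
}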
Take a look at man 2 alarm. You can arrange for a SIGALRM signal to be delivered to your application after a certain time period elapses, and cancel it again once the response arrives.
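A rough sketch of that approach, assuming the device is read through some blocking call; blocking_read_from_device(), TIMEOUT_SECONDS and TIMEOUT_ERROR are placeholders. The idea is that SIGALRM interrupts the blocking call, which then fails with EINTR:

#include <signal.h>
#include <unistd.h>
#include <string.h>
#include <errno.h>

static void on_alarm(int sig) { (void)sig; /* nothing to do: just interrupt the blocking call */ }

int read_with_timeout(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_alarm;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;                        /* deliberately no SA_RESTART */
    sigaction(SIGALRM, &sa, NULL);

    alarm(TIMEOUT_SECONDS);                 /* arm the timeout */
    int rc = blocking_read_from_device();   /* hypothetical blocking call */
    alarm(0);                               /* cancel any pending alarm */

    if (rc < 0 && errno == EINTR)
        return TIMEOUT_ERROR;               /* the alarm fired before a response arrived */
    return rc;
}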

MJPEG internet streaming - accurate fps

I want to write an MJPEG picture internet stream viewer. I think getting the JPEG images over sockets is not a very hard problem, but I want to know how to make the streaming timing accurate.
while (1)
{
    get_image();
    show_image();
    sleep(SOME_TIME);   // how to make it accurate?
}
Any suggestions would be great.
In order to make it accurate, there are two possibilities:
Using the frame rate from the streaming server. In this case the client needs to keep the same frame rate: each time you get a frame, show it, then sleep for a variable amount of time adjusted by feedback (if the frame rate calculated on the client is higher than the server's, sleep more; if lower, sleep less; the client-side frame rate will then drift around the original value from the server). The frame rate can be received from the server during the initialization of the streaming connection (when you get the picture size and other parameters), or it can be configured.
The most accurate approach, actually, is to use per-frame timestamps from the server (either taken from the file by the demuxer, or generated in the image sensor driver in the case of a camera device). If the MJPEG is packetized into an RTP stream, these timestamps are already in the RTP header, so the client's task is trivial: show each picture at the time calculated from the time offset, the current timestamp and the time base.
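A rough sketch of this timestamp-based pacing, assuming the common 90 kHz RTP video clock; now_us(), display() and frame_t are placeholders, not part of any particular library:

#include <stdint.h>
#include <unistd.h>

#define RTP_CLOCK_HZ 90000          /* usual RTP time base for video */

void show_frame(uint32_t rtp_ts, const frame_t *frame)
{
    static int      have_base = 0;
    static uint32_t base_ts;        /* RTP timestamp of the first frame        */
    static int64_t  base_us;        /* local monotonic clock when it was shown */

    if (!have_base) {
        have_base = 1;
        base_ts = rtp_ts;
        base_us = now_us();         /* hypothetical monotonic clock in microseconds */
    }

    /* offset of this frame relative to the first one, converted to microseconds */
    int64_t due_us = base_us + (int64_t)(rtp_ts - base_ts) * 1000000 / RTP_CLOCK_HZ;
    int64_t wait   = due_us - now_us();
    if (wait > 0)
        usleep((useconds_t)wait);

    display(frame);                 /* hypothetical rendering call */
}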
Update
For the first solution:
time_to_sleep = time_to_sleep_base = 1 / framerate;
number_of_frames = 0;
time = current_time();

while (1)
{
    get_image();
    show_image();
    sleep(time_to_sleep);

    /* update time to sleep */
    number_of_frames++;
    cur_time = current_time();
    cur_framerate = number_of_frames / (cur_time - time);
    if (cur_framerate > framerate)
        time_to_sleep += alpha * time_to_sleep;
    else
        time_to_sleep -= alpha * time_to_sleep;
    time = cur_time;
}
where alpha is a constant parameter controlling the reactivity of the feedback (0.1..0.5) to play with.
However, it's better to organize a queue for the input images to make the display smoother. The queue size can be parameterized and could hold roughly one second's worth of frames, i.e. be numerically equal to the frame rate.
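A minimal sketch of such a queue, as a fixed-size ring buffer guarded by a mutex and condition variables; frame_t and QUEUE_SIZE are placeholders (the network thread pushes, the display loop pops):

#include <pthread.h>

#define QUEUE_SIZE 25                       /* roughly one second at 25 fps */

typedef struct frame frame_t;               /* placeholder for a decoded/received JPEG frame */

typedef struct {
    frame_t        *slots[QUEUE_SIZE];
    int             head, tail, count;
    pthread_mutex_t lock;
    pthread_cond_t  not_full, not_empty;
} frame_queue_t;

void queue_push(frame_queue_t *q, frame_t *f)   /* called by the network thread */
{
    pthread_mutex_lock(&q->lock);
    while (q->count == QUEUE_SIZE)
        pthread_cond_wait(&q->not_full, &q->lock);
    q->slots[q->tail] = f;
    q->tail = (q->tail + 1) % QUEUE_SIZE;
    q->count++;
    pthread_cond_signal(&q->not_empty);
    pthread_mutex_unlock(&q->lock);
}

frame_t *queue_pop(frame_queue_t *q)            /* called by the display loop */
{
    pthread_mutex_lock(&q->lock);
    while (q->count == 0)
        pthread_cond_wait(&q->not_empty, &q->lock);
    frame_t *f = q->slots[q->head];
    q->head = (q->head + 1) % QUEUE_SIZE;
    q->count--;
    pthread_cond_signal(&q->not_full);
    pthread_mutex_unlock(&q->lock);
    return f;
}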

Resources