I've been looking for a way to modify a GStreamer stream in order to visualize encoding quality (for example, by showing the raw video next to the difference between the raw video and the encoded/decoded stream).
I've been learning GStreamer for a week, and so far I've been able to do what I was asked, but now I'm stuck.
I looked into the compositor element, which seems to mix streams, but I'm pretty sure it cannot do what I need.
Then I checked appsrc and appsink in some code. I tried to build a pipeline: filesrc - appsink - appsrc - filesink. But for obvious reasons it did not work. I browsed GitHub projects, but most uses of appsrc/appsink were just to do a task programmatically, like reading a file.
Lastly I found someone with the same problem as me. He "solved" it by creating 2 pipelines (filesrc - appsink & appsrc - filesink), but he still got allocation errors. I was not even able to run the code he shared.
Does anyone have any idea on how to get it done in a clean way?
I found a way to modify a stream.
It basically was to create my own plugin: by writing my own element I can modify the buffers as they flow between my element's sink pad and source pad.
If someone is interested, the documentation explains how to create a plugin, and here's my chain function where I modify the data between the pads:
static GstFlowReturn
gst_my_filter_chain (GstPad * pad, GstObject * parent, GstBuffer * buf)
{
  GstMyFilter *filter = GST_MYFILTER (parent);

  if (filter->silent == FALSE) {
    GstMapInfo info;

    if (gst_buffer_map (buf, &info, GST_MAP_READWRITE)) {
      guint8 c = 100;   /* value added to each byte to make the color lighter */
      gsize i;

      /* iterate over info.size (the valid data length);
       * info.maxsize is only the allocated capacity */
      for (i = 0; i < info.size; i++) {
        int cc = info.data[i] + c;            /* add the value to the color */
        info.data[i] = (cc > 255) ? 255 : cc; /* clamp so the color stays valid */
      }
      /* unmap only when the map succeeded */
      gst_buffer_unmap (buf, &info);
    }
  }
  return gst_pad_push (filter->srcpad, buf);
}
It's quite rudimentary and only lightens the video's colors, but I can modify a stream, so it's only a matter of time before I get the thing I want done.
Just hope it'll help someone.
Compositor may be the simplest solution for what you're trying to do. You would have to use tee to fork into two sub-pipelines, one for the direct video and another one going through encoding/decoding (here using JPEG), and compose both:
gst-launch-1.0 videotestsrc ! video/x-raw,width=640,height=480,framerate=30/1 ! tee name=t t. ! queue ! comp.sink_0 t. ! queue ! jpegenc ! jpegdec ! comp.sink_1 compositor name=comp sink_0::xpos=0 sink_1::xpos=640 ! autovideosink
Hello GStreamer community & fans,
I have a working pipeline that connects to multiple H.264 IP camera streams using multiple rtspsrc elements aggregated into a single pipeline for downstream video processing.
Intermittently and randomly, streams coming in over remote and slower connections will have problems, time out, retry, and go dead, leaving that stream as a black image in the post-processed view while the other streams continue to process normally. The rtspsrc elements are set up to retry the RTSP connection, and that somewhat works; for the cases where it doesn't, I'm looking for a way to disconnect the stream entirely from the rtspsrc element and restart that particular stream without disrupting the other streams.
I haven't found any obvious examples or ways to accomplish this, so I've been tinkering with the rtspsrc element code itself, using this public function to access the rtspsrc internals that handle connecting:
__attribute__ ((visibility ("default"))) GstRTSPResult my_gst_rtspsrc_conninfo_reconnect (GstRTSPSrc *, gboolean);

GstRTSPResult
my_gst_rtspsrc_conninfo_reconnect (GstRTSPSrc * src, gboolean async)
{
  int retries = 0, port = 0;
  char portrange_buff[32];
  // gboolean manual_http;

  GST_ELEMENT_WARNING (src, RESOURCE, READ, (NULL),
      (">>>>>>>>>> Streamer: A camera closed the streaming connection. Trying to reconnect"));
  gst_rtspsrc_set_state (src, GST_STATE_PAUSED);
  gst_rtspsrc_set_state (src, GST_STATE_READY);
  gst_rtspsrc_flush (src, TRUE, FALSE);
  // manual_http = src->conninfo.connection->manual_http;
  // src->conninfo.connection->manual_http = TRUE;
  gst_rtsp_connection_set_http_mode (src->conninfo.connection, TRUE);

  if (gst_rtsp_conninfo_close (src, &src->conninfo, TRUE) == GST_RTSP_OK) {
    memset (portrange_buff, 0, sizeof (portrange_buff));
    g_object_get (G_OBJECT (src), "port-range", portrange_buff, NULL);

    /* parse the leading digits of "min-max"; subtracting '0' turns each
     * ASCII digit into its numeric value */
    for (retries = 0; portrange_buff[retries] && isdigit (portrange_buff[retries]); retries++)
      port = (port * 10) + (portrange_buff[retries] - '0');

    if (port != src->client_port_range.min)
      GST_ELEMENT_WARNING (src, RESOURCE, READ, (NULL),
          (">>>>>>>>>> Streamer: port range start mismatch"));

    GST_WARNING_OBJECT (src,
        ">>>>>>>>>> Streamer: old port.min: %d, old port.max: %d, old port-range: %s\n",
        src->client_port_range.min, src->client_port_range.max, portrange_buff);

    /* shift the client port range before reconnecting */
    src->client_port_range.min += 6;
    src->client_port_range.max += 6;
    src->next_port_num = src->client_port_range.min;

    memset (portrange_buff, 0, sizeof (portrange_buff));
    snprintf (portrange_buff, sizeof (portrange_buff), "%d-%d",
        src->client_port_range.min, src->client_port_range.max);
    g_object_set (G_OBJECT (src), "port-range", portrange_buff, NULL);

    for (retries = 0; retries < 5 && gst_rtsp_conninfo_connect (src, &src->conninfo, async) != GST_RTSP_OK; retries++)
      sleep (10);
  }

  if (retries < 5) {
    gst_rtspsrc_set_state (src, GST_STATE_PAUSED);
    gst_rtspsrc_set_state (src, GST_STATE_PLAYING);
    return GST_RTSP_OK;
  }

  return GST_RTSP_ERROR;
}
I realize this is probably not best practice; I'm doing it as a learning exercise, so I can find a better way once I understand the internals.
I appreciate any feedback anyone has to this problem.
-Doug
I am relatively new to working with threads in the Win32 API and have reached a problem that I am unable to work out.
Here's my problem: I have 4 threads (they work as intended) that allow the operator to test 4 terminals. In each thread I am trying to send a message to the main window with either Pass or Fail, which is placed in a listbox. Below is one of the threads; the remaining ones are exactly the same.
void Thread1(PVOID pvoid)
{
    for (int i = 0; i < numberOfTests1; i++) {
        int ret;
        double TimeOut = 60.0;
        int Lng = 1;
        unsigned char Param[255] = {0};
        unsigned char Port1 = port1;

        test1[i].testNumber = getTestNumber(test1[i].testName);
        ret = PSB30_Open(Port1, 16);
        ret = PSB30_SendOrder(Port1, test1[i].testNumber, &Param[0], &Lng, &TimeOut);
        ret = PSB30_Close(Port1);

        /* LB_ADDSTRING takes the string in lParam; its wParam is unused
         * and must be zero */
        if (*Param == 1)
            SendDlgItemMessage(hWnd, IDT_RESULTLIST1, LB_ADDSTRING, 0, (LPARAM)"PASS");
        else
            SendDlgItemMessage(hWnd, IDT_RESULTLIST1, LB_ADDSTRING, 0, (LPARAM)"FAIL");
    }
    _endthread();
}
I have debugged the code and it does everything except populate the listbox. I assume that because it's a thread I am missing something, as the same code works outside the thread. Do I need to put the thread to sleep while it sends the message to the main window?
Any help is appreciated.
Cheers
You don't want your secondary threads manipulating your UI elements directly (such as with that SendDlgItemMessage call). Instead, you normally post something like a WM_COMMAND or WM_USER+N message to the main window and let its window procedure update the UI accordingly.
I'm trying to create a simple OpenCV program in C that creates a file capture from an .avi and plays it in a window, highlighting faces. I'm running a self-compiled version of OpenCV (I already tried the same with a JPEG image and it works).
Building goes well, no errors, no warnings, but when I launch it the console outputs this:
Unknown parameter encountered: "server role"
Ignoring unknown parameter "server role"
And the program simply stops.
Previously it was complaining about a missing /home/#user/.smb/smb.conf file, so I tried installing Samba (even though I still have no idea what Samba has to do with any of this).
Here is my code:
int main() {
    printf("Ciao!");
    cvNamedWindow("window", CV_WINDOW_AUTOSIZE);
    cvWaitKey(0);
    printf("ok");

    CvCapture* capture = cvCreateFileCapture("monsters.avi");
    CvHaarClassifierCascade* cascade = load_object_detector("haarcascade_frontalface_alt.xml");
    CvMemStorage* storage = cvCreateMemStorage(0);

    // List of the faces
    CvSeq* faces;

    while (0 < 10) {
        CvArr* image = cvQueryFrame(capture);
        double scale = 1;
        faces = cvHaarDetectObjects(image, cascade, storage, 1.2, 2,
                                    CV_HAAR_DO_CANNY_PRUNING, cvSize(1,1), cvSize(300,300));
        int i;
        for (i = 0; i < faces->total; i++) {
            CvRect face_rect = *(CvRect*)cvGetSeqElem(faces, i);
            cvRectangle(image,
                        cvPoint(face_rect.x*scale, face_rect.y*scale),
                        cvPoint((face_rect.x+face_rect.width)*scale,
                                (face_rect.y+face_rect.height)*scale),
                        CV_RGB(255,0,0), 3, 8, 0);
        }
        cvReleaseMemStorage(&storage);
        cvShowImage("window", image);
    }
    cvWaitKey(0);
    printf("Ciao!");
}
Thank you for your answer; I switched to C++ for my trials. Now I did this:
int main() {
    namedWindow("Video", CV_WINDOW_FREERATIO);
    VideoCapture cap("sintel.mp4");
    if (!cap.isOpened())  // check if we succeeded
        return -1;

    Mat edges;
    for (;;) {
        Mat frame;
        cap >> frame;
        cvtColor(frame, edges, CV_BGR2GRAY);
        GaussianBlur(edges, edges, Size(7,7), 1.5, 1.5);
        Canny(edges, edges, 0, 30, 3);
        imshow("Video", edges);
        //cvWaitKey(0);
    }
    return 0;
}
Now it successfully loads the video and queries a frame; every time I press a key it obviously queries another frame, and everything works fine. But if I comment out the waitKey() the program simply hangs for a bit, and crashes if I try to close the window. I'm starting to think there is a problem with codecs or something like that...
There are so many potential problems in the code, most of them related to not coding defensively.
What is cvWaitKey(0); doing after cvNamedWindow()? It's unnecessary; remove it!
What happens if the capture was unsuccessful? Code defensively:
CvCapture* capture = cvCreateFileCapture("monsters.avi");
if (!capture)
{
// File not found, handle error and possibly quit the application
}
and you should use this technique for every pointer you receive from OpenCV, OK?
One of the major problems is that you allocate memory for CvMemStorage before the loop, but release it inside the loop. That means after the first iteration there is no longer a valid CvMemStorage* storage, and that's a HUGE problem.
Either move the allocation to the beginning of the loop, so memory is allocated and deallocated on every iteration, or move the cvReleaseMemStorage( &storage ); call out of the loop.
Now it works fine. I changed cvWaitKey() to this:
if(waitKey(30) >= 0) break;
I don't understand exactly why, but now everything works as it should :)
Developing an iOS application that uses the CoreAudio framework, I am dealing with what is, IMHO, nonsensical behavior of SDL regarding audio playback. SDL plays audio in a loop; the only way to trigger playback is to call SDL_PauseAudio(0), and the only way to stop it (without other side effects, which I won't go into here) is to call SDL_PauseAudio(1). As far as I know.
What is the problem for me in SDL here? Simply this: the next call to SDL_PauseAudio(0) actually RESUMES the playback, causing the framework to play some leftover mess *before asking for new sound data*. This is because of the way SDL_CoreAudio.c implements the playback loop.
It means that SDL does not implement STOP; it implements just PAUSE/RESUME, and incorrectly manages audio processing. So if you play sampleA and later want to play sampleB, you will also hear fragments of sampleA when you expect to hear only sampleB.
If I am wrong, please correct me.
If not, here's my diff, which I used to implement STOP behavior as well: as soon as I finish playing sampleA, I call SDL_PauseAudio(2) so that the playback loop quits, and the next call to SDL_PauseAudio(0) starts it again, this time playing no leftovers from sampleA but correctly playing just data from sampleB.
Index: src/audio/coreaudio/SDL_coreaudio.c
===================================================================
--- src/audio/coreaudio/SDL_coreaudio.c
+++ src/audio/coreaudio/SDL_coreaudio.c
@@ -250,6 +250,12 @@
         abuf = &ioData->mBuffers[i];
         SDL_memset(abuf->mData, this->spec.silence, abuf->mDataByteSize);
     }
+    if (2 == this->paused)
+    {
+        // this changes 'pause' behavior to 'stop' behavior - next
+        // playing starts from the beginning, i.e. it won't resume
+        this->hidden->bufferOffset = this->hidden->bufferSize;
+    }
     return 0;
 }
I am ashamed that I edited SDL code, but I have no connection to the authors and haven't found any help elsewhere. It seems strange to me that no one else needs STOP behavior in SDL?
A way around your issue is, for instance, to manage your audio device with SDL more directly. Here is what I suggest:
void myAudioCallback(void *userdata, Uint8 *stream, int len) { ... }
SDL_AudioDeviceID audioDevice;
void startAudio()
{
// prepare the device
static SDL_AudioSpec audioSpec;
SDL_zero(audioSpec);
audioSpec.channels = 2;
audioSpec.freq = 44100;
audioSpec.format = AUDIO_S16SYS;
audioSpec.userdata = (void*)myDataLocation;
audioSpec.callback = myAudioCallback;
audioDevice = SDL_OpenAudioDevice(NULL, 0, &audioSpec, &audioSpec, 0);
// actually start playback
SDL_PauseAudioDevice(audioDevice, 0);
}
void stopAudio()
{
SDL_CloseAudioDevice(audioDevice);
}
This works for me, the callback is not called after stopAudio() and no garbage is sent to the speaker either.
I'm trying to use OpenCV to write a video file. I have a simple program that loads frames from a video file and then tries to save them.
At first, cvCreateVideoWriter always returned NULL. I got an answer from your group suggesting it could write separate images instead, by changing the file name to test0001.png, and that worked.
But now the cvWriteFrame function always fails. The code is:
CString path;
path = "d:\\mice\\Test_Day26_2.avi";
CvCapture* capture = cvCaptureFromAVI(path);

IplImage* img = 0;
CvVideoWriter* writer = 0;
int isColor = 1;
int fps    = 25;  // or 30
int frameW = 640; // 744 for firewire cameras
int frameH = 480; // 480 for firewire cameras

writer = cvCreateVideoWriter("d:\\mice\\test0001.png", CV_FOURCC('P','I','M','1'),
                             fps, cvSize(frameW, frameH), isColor);
if (writer == 0)
    MessageBox("could not open writer");

int nFrames = 50;
for (int i = 0; i < nFrames; i++) {
    if (!cvGrabFrame(capture))
        MessageBox("could not grab frame");
    img = cvRetrieveFrame(capture);  // retrieve the captured frame
    if (img == 0)
        MessageBox("could not retrieve data");
    if (!cvWriteFrame(writer, img))
        MessageBox("could not write frame");
}
cvReleaseVideoWriter(&writer);
Try CV_FOURCC('D', 'I', 'V', 'X') or CV_FOURCC('f', 'f', 'd', 's') (with a *.avi filename), or CV_FOURCC_DEFAULT (with *.mpg). Video writing is still quite messy in OpenCV >_>
I've seen many issues with writing video in OpenCV as well. I found the Intel iYUV format worked well for what I needed.
Was your library built with HAVE_FFMPEG defined?
If it wasn't, you might need to recompile OpenCV with that option. You should see something like this in the configure step:
...
Video I/O --------------
    Use QuickTime   no
    Use xine        no
    Use ffmpeg      yes
    Use v4l         yes
...
If you don't have ffmpeg, you can get it from here.