I'm developing a video server in C on GNU/Linux, and I'm using FFmpeg to handle the data of each video file. So I open the file, get all the information about its container, then do the same with its codec and start reading frames one by one.
Unfortunately, FFmpeg, and more precisely avcodec, is not very well documented. I need to know whether a frame is an I-frame or a B-frame so I can keep a record of it; how could I do that?
Thanks in advance.
The picture type is given by the pict_type field of struct AVFrame. Several picture types are defined in FFmpeg (the code below handles four of them); pict_type is set to FF_I_TYPE for I-frames.
For example, here is part of my debug code, which gives me a letter to put in a debug message:
/* _avframe is a struct AVFrame* */
switch (_avframe->pict_type)
{
    case FF_I_TYPE:
        return "I";
    case FF_P_TYPE:
        return "P";
    case FF_S_TYPE:
        return "S";
    case FF_B_TYPE:
        return "B";
    default:
        return "?";   /* other/unknown picture types */
}
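Note that newer FFmpeg releases replace the FF_*_TYPE constants with the AV_PICTURE_TYPE_* enum from libavutil, which also provides av_get_picture_type_char() to map a picture type to a letter. A minimal, untested sketch of the same check against that newer API (assuming frame points to a decoded AVFrame):

#include <libavcodec/avcodec.h>
#include <libavutil/avutil.h>

/* Return a one-letter description of the frame's picture type. */
static char picture_type_char(const AVFrame *frame)
{
    if (frame->pict_type == AV_PICTURE_TYPE_I)
        return 'I';                        /* intra-coded (key) frame */
    /* av_get_picture_type_char() handles P, B, S and the rest */
    return av_get_picture_type_char(frame->pict_type);
}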
Manuel,
Have you tried ffprobe yet? It is a multimedia stream analyzer that lets you see the type of each frame. You can download it from SourceForge.net. To compile it you will need GNU autoconf, a C compiler and a working installation of FFmpeg. Let me know if that helps.
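For example, running something like ffprobe -select_streams v:0 -show_frames input.mp4 (with input.mp4 standing in for your own file) should print a pict_type=I/P/B line for every video frame, which you can then grep for.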
We have written some short C code to read a video file, using common libraries such as libavcodec and libavformat.
The code runs smoothly, but only uses CPU resources. We need to run the code on the GPU (Nvidia GeForce 940MX and 1080Ti). Is there a way to force the code to run on the GPU?
Things work fine from the command line (e.g., ffmpeg -hwaccel cuvid -i vid.mp4 out.avi), but we have not been able to get GPU decoding working from our own source code.
We are working on Ubuntu 18.04, with FFmpeg correctly compiled against CUDA 9.2.
There are pretty good examples of using libav (FFmpeg) to encode and decode video at https://github.com/FFmpeg/FFmpeg/tree/master/doc/examples.
For what you need, take the demuxing_decoding.c example and change line 166, which is:
/* find decoder for the stream */
dec = avcodec_find_decoder(st->codecpar->codec_id);
with
/* find decoder for the stream */
if (st->codecpar->codec_id == AV_CODEC_ID_H264)
{
    dec = avcodec_find_decoder_by_name("h264_cuvid");
}
else if (st->codecpar->codec_id == AV_CODEC_ID_HEVC)
{
    dec = avcodec_find_decoder_by_name("hevc_cuvid");
}
else
{
    dec = avcodec_find_decoder(st->codecpar->codec_id);
}
Add or change lines for other formats, and make sure your FFmpeg is compiled with --enable-cuda --enable-cuvid.
In my tests I got an error coming from line 85, because nvdec (hevc_cuvid) uses the P010 internal format for 10-bit input (the input was yuv420p10). This means the decoded frame will be in either the NV12 or the P010 pixel format, depending on bit depth, so I hope you are familiar with pixel formats.
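As an illustration of that last point, a small untested sketch: after a successful avcodec_receive_frame() you can branch on frame->format to see which layout the cuvid decoder produced (frame here is the AVFrame from the example):

#include <libavutil/pixdesc.h>

if (frame->format == AV_PIX_FMT_NV12) {
    /* 8-bit content decoded by the cuvid decoder */
} else if (frame->format == AV_PIX_FMT_P010LE) {
    /* 10-bit content, e.g. yuv420p10 input */
} else {
    fprintf(stderr, "unexpected pixel format %s\n",
            av_get_pix_fmt_name((enum AVPixelFormat)frame->format));
}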
Hope that helps.
I need to open the default audio capture device and start recording. libsox seems to be a nice cross-platform solution. Using the binary frontend, I can just run rec test.wav and the default microphone is activated.
However, when browsing the documentation, I found no similar functionality. This thread discusses precisely the same topic as my question, but doesn't seem to have reached a solution.
Where can I find an example of using libsox to record from the default audio device?
You can record using libsox. Just set the input file to "default" and set the filetype to the audio driver (e.g. coreaudio on Mac, alsa or oss on Linux).
const char* audio_driver = "alsa";
sox_format_t* input = sox_open_read("default", NULL, NULL, audio_driver);
Look at some examples for more info on how to structure the rest of the code.
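To make that concrete, here is a minimal, untested sketch of the approach (assuming ALSA on Linux; on a Mac you would pass "coreaudio" instead). It just opens the default device and pulls a few blocks of samples; the buffer size and block count are arbitrary choices:

#include <sox.h>
#include <stdio.h>

int main(void)
{
    sox_format_t *in;
    sox_sample_t buf[1024];
    size_t n, blocks = 0;

    if (sox_init() != SOX_SUCCESS)
        return 1;

    /* "default" device, signal/encoding left to the driver, filetype = driver name */
    in = sox_open_read("default", NULL, NULL, "alsa");
    if (!in) {
        fprintf(stderr, "cannot open capture device\n");
        return 1;
    }

    /* pull ~100 blocks of samples; a real recorder would loop until stopped
       and hand the samples to its own processing or to sox_write() */
    while ((n = sox_read(in, buf, 1024)) > 0 && ++blocks < 100) {
        /* process the n captured samples here */
    }

    sox_close(in);
    sox_quit();
    return 0;
}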
You need to record with ALSA first and use libsox to get the right format; libsox is not meant for recording. See this example: https://gist.github.com/albanpeignier/104902
I am coding an app in C for Windows using OpenCV. I want to capture video from the webcam and show it in a window.
The app is almost finished but it doesn't work properly. I think it's because cvQueryFrame() always returns NULL and I don't know why. I tried capturing some frames before entering the while loop, but that didn't fix the problem.
The compiler doesn't show me any errors; it's not a compilation problem but an execution one. I debugged it step by step, and at the line
if(!originalImg) break;
it always jumps out of the while loop. That's why the app doesn't stay running: it opens and closes very quickly.
Here's the code:
void main()
{
    cvNamedWindow("Original Image", CV_WINDOW_AUTOSIZE);
    while (1)
    {
        originalImg = cvQueryFrame(capture);
        if (!originalImg) break;
        cvShowImage("Original Image", originalImg);
        c = cvWaitKey(10);
        if (c == 27) break;
    }
    cvReleaseCapture(&capture);
    cvDestroyWindow("Original Image");
}
Let's see if someone has an idea and can help me with this, thanks!
Assuming the compilation was OK (all relevant libraries included), it may be that the camera has not been installed properly. Can you check whether you are able to use the webcam otherwise (with some other software)?
If the compilation is actually the issue, please refer to the following related question:
https://stackoverflow.com/a/5313594/1218748
Quick summary:
Recompile opencv_highgui, changing the "Preprocessor Definitions" in the C/C++ panel of the properties page to include: HAVE_VIDEOINPUT HAVE_DSHOW
There are other good answers that raise some relevant points, but my gut feeling is that the above solution would work :-)
It seems you have not opened the capture device.
Add in the beginning of main:
CvCapture* capture = 0;
capture = cvCaptureFromCAM(0);
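Putting the two pieces together, a minimal sketch of the whole program with the capture opened and checked might look like this (old OpenCV 1.x C API, as in the question; the header path may differ between OpenCV installations):

#include <stdio.h>
#include <opencv/highgui.h>

int main(void)
{
    IplImage *originalImg;
    int c;

    /* 0 = default webcam */
    CvCapture *capture = cvCaptureFromCAM(0);
    if (!capture) {
        fprintf(stderr, "Could not open the webcam\n");
        return 1;
    }

    cvNamedWindow("Original Image", CV_WINDOW_AUTOSIZE);
    while (1)
    {
        originalImg = cvQueryFrame(capture);
        if (!originalImg) break;
        cvShowImage("Original Image", originalImg);
        c = cvWaitKey(10);
        if (c == 27) break;     /* ESC quits */
    }

    cvReleaseCapture(&capture);
    cvDestroyWindow("Original Image");
    return 0;
}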
I've taken xinimin.c and added seek and OSD functionality. The last big piece that I need to implement is deinterlacing; however, I'm finding very little documentation. I've been through the hacker's guide and of course I have googled and googled. I found the deprecated method:
xine_set_param(stream, XINE_PARAM_VO_DEINTERLACE, 1);
which did not work. I saw that the current method involves post plugins, but my /usr/include/xine/post.h doesn't have the word deinterlace in it.
Can anyone provide an example of how to implement deinterlacing? It would be nice to have the flexibility down the road to change the deinterlacer, but something equivalent to the -D option on the command line is what I'm looking for to start with.
Is there a good resource for example source files?
Is this what you are looking for? The src/post/deinterlace directory:
(4 links to pretty much the same:)
debian.org/hg/xine-lib
github: huceke/xine-lib-vaapi/tree/master/src/post/deinterlace
fossies dox: xine-lib 1.2.1, deinterlace.h File Reference
xine_plugin.c File Reference
From hackersguide - Walking the source tree:
post
Video and audio post effect plugins live here. Post plugins
modify streams of video frames or audio buffers as they leave
the decoder to provide conversion or effects.
deinterlace (imported)
The tvtime deinterlacer as a xine video filter post.
hackersguide, Plugin system
Edit:
I have only installed libxine, not anything else.
It is, however, a good idea to download the source, as the project is to a large extent documented in the code. If you use e.g. Vim, it is nice to use it with cscope and/or ctags (as shown in this tutorial). Then you can jump to functions, definitions, callers, etc. (across files) with just a couple of keystrokes. (They map where every function is defined, called, and so on.)
When compiling, on Linux at least, I have to add the libraries at the end (after the source file):
gcc -Wall -Wextra -pedantic -std=c89 -o muxine muxine.c -lX11 -lxine
Perhaps this will get you further along the way. Using the sample code muxine.c and reading the source documentation (mainly xine.h), I added the following in main:
const char* const *tmp;
xine_post_t *post_x;
xine_post_api_t *post_api;
xine_post_in_t *input_api;
xine_post_api_descr_t *param;
const char *post_plug_t = "tvtime";
/*const char *post_plug_d = "deinterlace"; perhaps this?? */
After the existing sample code:
ao_port = xine_open_audio_driver(xine , ao_driver, NULL);
stream = xine_stream_new(xine, ao_port, vo_port);
I added
/* get a list of all available post plugins */
if ((tmp = xine_list_post_plugins(xine)) == NULL) {
    fprintf(stderr, "Unable to get post plugins\n");
    xine_exit(xine);
    return 1;
}
printf("Post plugins:\n");
while (*tmp != NULL)
    printf("  %s\n", *tmp++);

/* initialize a post plugin */
if ((post_x = xine_post_init(xine, post_plug_t, 1,
                             &ao_port, &vo_port)) == NULL) {
    fprintf(stderr, " *ERR: Unable to 'post init' %s;\n", post_plug_t);
    xine_exit(xine);
    return 1;
}

/* get a list of all outputs of a post plugin */
tmp = xine_post_list_outputs(post_x);
printf("Post List Outputs:\n");
while (*tmp != NULL)
    printf("  %s\n", *tmp++);
I get at least tvtime; I believe that is the deinterlacing plugin (it sounds like it when reading the comments in src/post/...).
There is also the struct xine_post_api_t, which in turn has set_parameters(), which can be used to control the plugin (from the looks of it).
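One thing the snippet above does not do is actually route the stream's video through the plugin. Based on how xine front-ends appear to do it (treat this as an untested sketch rather than something from the hackersguide), the wiring after xine_post_init() would look roughly like this:

/* redirect the stream's video output into the post plugin's first video
   input instead of sending it straight to the video port */
xine_post_out_t *video_source = xine_get_video_source(stream);
if (!xine_post_wire_video_port(video_source, post_x->video_input[0]))
    fprintf(stderr, " *ERR: could not wire %s into the video chain\n",
            post_plug_t);

/* the plugin's "parameters" input should expose a xine_post_api_t whose
   set_parameters() can then switch deinterlace methods at runtime */
input_api = xine_post_input(post_x, "parameters");
if (input_api)
    post_api = (xine_post_api_t *)input_api->data;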
Is there an example of a full-duplex ALSA connection in C? I've read that it is supported, but all the introductory examples I've seen either record or play a sound sample, whereas I'd like to have one handler that can do both for my VoIP app.
Many thanks for any help,
Jens
Some guy named Alan has published this good (but old) tutorial, Full Duplex ALSA, which is written in C.
You open both handles (playback and capture) and pump them in turn.
Here's Alan's code, elided and commented.
// the plughw device handles dynamic sample rate and type conversion.
// there is a range of alternate devices defined in your alsa.conf
// try:
//     locate alsa.conf
// and check out what devices you have in there
//
// The following device is PLUG:HW:Device:0:Subdevice:0
// Often simply plug, plughw, or plughw:0 will have the same effect
//
char *snd_device_in = "plughw:0,0";
char *snd_device_out = "plughw:0,0";
// handle constructs to populate with our links
snd_pcm_t *playback_handle;
snd_pcm_t *capture_handle;
//this is the usual construct... If not fail BLAH
if ((err = snd_pcm_open(&playback_handle, snd_device_out, SND_PCM_STREAM_PLAYBACK, 0)) < 0) {
    fprintf(stderr, "cannot open output audio device %s: %s\n", snd_device_out, snd_strerror(err));
    exit(1);
}
// And now the CAPTURE handle
if ((err = snd_pcm_open(&capture_handle, snd_device_in, SND_PCM_STREAM_CAPTURE, 0)) < 0) {
    fprintf(stderr, "cannot open input audio device %s: %s\n", snd_device_in, snd_strerror(err));
    exit(1);
}
Then configure and pump them.
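The tutorial does the configuration with the full snd_pcm_hw_params API; as a shorter, hedged sketch of the same "configure and pump" idea (a simplification using ALSA's convenience call snd_pcm_set_params(); the sample format, rate and buffer size are arbitrary choices, and error handling is minimal):

#include <alsa/asoundlib.h>

#define FRAMES 128

short buf[FRAMES * 2];               /* interleaved stereo, 16-bit samples */
snd_pcm_sframes_t n;

/* S16_LE, interleaved, 2 channels, 44.1 kHz, allow resampling, ~50 ms latency */
snd_pcm_set_params(capture_handle,  SND_PCM_FORMAT_S16_LE,
                   SND_PCM_ACCESS_RW_INTERLEAVED, 2, 44100, 1, 50000);
snd_pcm_set_params(playback_handle, SND_PCM_FORMAT_S16_LE,
                   SND_PCM_ACCESS_RW_INTERLEAVED, 2, 44100, 1, 50000);

for (;;) {
    n = snd_pcm_readi(capture_handle, buf, FRAMES);
    if (n < 0) {                                 /* overrun: reset and retry */
        snd_pcm_prepare(capture_handle);
        continue;
    }
    if (snd_pcm_writei(playback_handle, buf, n) < 0)
        snd_pcm_prepare(playback_handle);        /* underrun: reset playback */
}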
A ring buffer could do the job: http://soundprogramming.net/programming_and_apis/creating_a_ring_buffer. Or you could use Alan's approach, outlined above.
One of my first requirements for a Linux/Unix VoIP project was to find out the names and capabilities of all available audio devices, and then to use those devices to capture and play back audio.
To help others, I have made a shared (.so) library and a sample application in C++ demonstrating its use.
The output of my library looks like this:
[root@~]# ./IdeaAudioEngineTest
HDA Intel plughw:0,0
HDA Intel plughw:0,2
USB Audio Device plughw:1,0
The library provides functionality to capture and play back real-time audio data.
Full source with documentation is available in IdeaAudio library with Duplex Alsa Audio
Library source is now open at github.com
See also latency.c, included in alsa-lib source; on the ALSA wiki:
http://www.alsa-project.org/main/index.php/Test_latency.c