Hwaccel with AVCodec in FFmpeg?

I am using AVCodec as a video stream decoder and would like to know whether it is possible to use hardware acceleration (hwaccel) via FFmpeg, or whether it is already used by default.
I have already listed the available codecs, but I do not understand how to use them in my code.
AVHWAccel* pHwaccel = NULL;
pHwaccel = av_hwaccel_next(NULL);
while (pHwaccel != NULL)
{
    TkCore::Logger::info("%s", pHwaccel->name);
    pHwaccel = av_hwaccel_next(pHwaccel);
}
For h264 I obtain: h264_qsv, h264_vaapi, h264_vdpau.
I saw that the function:
AVHWAccel *ff_find_hwaccel(enum CodecID codec_id, enum PixelFormat pix_fmt)
is obsolete.
Thank you in advance for your help.

I implemented the decoder lookup with "avcodec_find_decoder", but I do not see how to apply hardware acceleration to the decoded frames... I have seen that pix_fmt can select hardware acceleration, for example a VDPAU pixel format for h264. The only question is which function is used to apply this VDPAU acceleration...

See this thread on libav-user. Basically after listing the hw accelerated codecs you can try to lookup the appropriate decoder with avcodec_find_decoder_by_name (since the AVHWAccel struct has the name field) and then use that for decoding. But then you need to know the codec upfront. If you use avformat_open_input then you may simply try to find a matching hw accelerated decoder by the codec id from the stream info, then open the hw accelerated codec by name and use that.
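A minimal sketch of that lookup (hedged: "h264_qsv" is an assumption, use whatever decoder name your hardware listing printed; error handling is abbreviated):

#include <libavcodec/avcodec.h>

/* Sketch: prefer a hardware decoder by name, fall back to software. */
static AVCodecContext *open_h264_decoder(void)
{
    const AVCodec *codec = avcodec_find_decoder_by_name("h264_qsv");
    if (!codec)
        codec = avcodec_find_decoder(AV_CODEC_ID_H264); /* software fallback */
    AVCodecContext *ctx = avcodec_alloc_context3(codec);
    if (!ctx || avcodec_open2(ctx, codec, NULL) < 0)
        return NULL; /* handle the error in real code */
    return ctx;
}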
Update, since I got downvoted
To provide a working example of this, I have an ffmpeg installation from Homebrew that lists videotoolbox (a HW-accelerated codec) via ffmpeg -encoders | grep h264:
V..... h264_videotoolbox VideoToolbox H.264 Encoder (codec h264)
And the following snippet also finds it:
extern "C"
{
#include <libavcodec/avcodec.h>
}
int main(int argc, char** argv)
{
auto *codec = avcodec_find_encoder_by_name("h264_videotoolbox");
if (codec)
{
return 0;
}
return 1;
}
Moreover, if you check what avcodec_find_encoder_by_name/avcodec_find_decoder_by_name does, you can see that it basically iterates the whole codec list, applying a filter to distinguish encoders from decoders:
static AVCodec *find_codec_by_name(const char *name, int (*x)(const AVCodec *))
{
    void *i = 0;
    const AVCodec *p;
    if (!name)
        return NULL;
    while ((p = av_codec_iterate(&i))) {
        if (!x(p))
            continue;
        if (strcmp(name, p->name) == 0)
            return (AVCodec*)p;
    }
    return NULL;
}

AVCodec *avcodec_find_encoder_by_name(const char *name)
{
    return find_codec_by_name(name, av_codec_is_encoder);
}

AVCodec *avcodec_find_decoder_by_name(const char *name)
{
    return find_codec_by_name(name, av_codec_is_decoder);
}
av_codec_iterate will iterate the codec_list variable, which is a pregenerated list of supported codecs (populated from the features enabled when configuring the build). So if any hw accelerated codecs were enabled during configuration, they will be there.
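As a hedged illustration, the same iterator can be used to list just the hardware-capable decoders of a build (AV_CODEC_CAP_HARDWARE exists in FFmpeg 4.0 and later):

#include <stdio.h>
#include <libavcodec/avcodec.h>

int main(void)
{
    /* Sketch: print the names of hardware-capable decoders in this build. */
    void *iter = NULL;
    const AVCodec *c;
    while ((c = av_codec_iterate(&iter))) {
        if (av_codec_is_decoder(c) && (c->capabilities & AV_CODEC_CAP_HARDWARE))
            printf("%s\n", c->name);
    }
    return 0;
}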

Related

Change name of Application in Pipewire/Pulseaudio

I am currently trying to build a very simple audio tool which needs to change its name in pavucontrol and qjackctl at runtime. When an application produces audio, its name is shown in pavucontrol; e.g. if I use Firefox it is shown as "Firefox". I tried the most commonly suggested solutions: editing argv and using prctl, but neither succeeded.
I also searched the PipeWire documentation, but I didn't find anything useful (maybe I am just blind).
Is it even possible? Where does PipeWire get the name of the application from?
Here is a little test-script in C with SDL2:
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <SDL2/SDL.h>

Uint8* audio_buffer = NULL;
Uint32 audio_length = 0;

void audio_callback(void* userdata, Uint8* stream, int n) {
    memset(stream, 0, n);
}

int main(int argc, char** argv) {
    SDL_Event evt;
    SDL_AudioSpec desired;
    SDL_Init(SDL_INIT_AUDIO|SDL_INIT_EVENTS);
    SDL_LoadWAV("suil.wav", &desired, &audio_buffer, &audio_length);
    desired.callback = audio_callback;
    SDL_OpenAudio(&desired, NULL);
    SDL_PauseAudio(0);
    while (1) {
        while (SDL_PollEvent(&evt)) {
            switch (evt.type) {
            case SDL_QUIT:
                exit(EXIT_SUCCESS);
            }
        }
    }
}
And a picture of what I would like to have changed at runtime:
(Note: The "test" would be the name in question.)
Disclaimer:
I'm not sure whether this is SDL2-specific, so I added the SDL tag.
SDL's Pipewire backend grabs the application name in this block:
/* Get the hints for the application name, stream name and role */
app_name = SDL_GetHint(SDL_HINT_AUDIO_DEVICE_APP_NAME);
if (!app_name || *app_name == '\0') {
    app_name = SDL_GetHint(SDL_HINT_APP_NAME);
    if (!app_name || *app_name == '\0') {
        app_name = "SDL Application";
    }
}
...via the hint system:
SDL_HINT_APP_NAME:
/**
* \brief Specify an application name.
*
* This hint lets you specify the application name sent to the OS when
* required. For example, this will often appear in volume control applets for
* audio streams, and in lists of applications which are inhibiting the
* screensaver. You should use a string that describes your program ("My Game
* 2: The Revenge")
*
* Setting this to "" or leaving it unset will have SDL use a reasonable
* default: probably the application's name or "SDL Application" if SDL
* doesn't have any better information.
*
* Note that, for audio streams, this can be overridden with
* SDL_HINT_AUDIO_DEVICE_APP_NAME.
*
* On targets where this is not supported, this hint does nothing.
*/
#define SDL_HINT_APP_NAME "SDL_APP_NAME"
SDL_HINT_AUDIO_DEVICE_APP_NAME:
/**
* \brief Specify an application name for an audio device.
*
* Some audio backends (such as PulseAudio) allow you to describe your audio
* stream. Among other things, this description might show up in a system
* control panel that lets the user adjust the volume on specific audio
* streams instead of using one giant master volume slider.
*
* This hints lets you transmit that information to the OS. The contents of
* this hint are used while opening an audio device. You should use a string
* that describes your program ("My Game 2: The Revenge")
*
* Setting this to "" or leaving it unset will have SDL use a reasonable
* default: this will be the name set with SDL_HINT_APP_NAME, if that hint is
* set. Otherwise, it'll probably the application's name or "SDL Application"
* if SDL doesn't have any better information.
*
* On targets where this is not supported, this hint does nothing.
*/
#define SDL_HINT_AUDIO_DEVICE_APP_NAME "SDL_AUDIO_DEVICE_APP_NAME"
...and then passes the app name into Pipewire using PW_KEY_APP_NAME, here:
PIPEWIRE_pw_properties_set(props, PW_KEY_APP_NAME, app_name);
...where SDL's PIPEWIRE_pw_properties_set() is just a pointer to Pipewire's pw_properties_set().
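So, applied to the test script from the question, a minimal sketch: since the quoted docs say the hint is used while opening an audio device, renaming at runtime presumably means closing and reopening the device (that reopen is my assumption, not something SDL documents as the way to rename):

/* Sketch: set the stream name before SDL_OpenAudio(). */
SDL_SetHint(SDL_HINT_AUDIO_DEVICE_APP_NAME, "test");
SDL_OpenAudio(&desired, NULL);

/* To rename at runtime (assumption: reopening the device is acceptable): */
SDL_CloseAudio();
SDL_SetHint(SDL_HINT_AUDIO_DEVICE_APP_NAME, "new name");
SDL_OpenAudio(&desired, NULL);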

The correct way to communicate with Glib/Gtk from another thread [duplicate]

I need to attach file descriptors to the GLib main loop. My issue is that the list of file descriptors is not fixed during execution.
According to the GLib documentation, I can:
create a GIOChannel for each FD using g_io_channel_unix_new and attach it to the context with g_io_add_watch
use a GSource created with g_io_create_watch and set a callback with g_source_set_callback
My question is: is it possible to dynamically modify a source or a context, and how can I do it? I found the GSourceFuncs facility, but that doesn't fit my issue.
Thanks for your help.
g_io_add_watch returns an event source ID which you can later use to dynamically remove the watch again, using g_source_remove. Use one event source per FD and instead of modifying existing watches, remove the old ones and create appropriate new ones.
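A minimal sketch of that pattern (on_fd_ready is a hypothetical callback name; error handling omitted):

/* Sketch: one watch per FD; remove and re-create instead of modifying. */
static gboolean on_fd_ready(GIOChannel *ch, GIOCondition cond, gpointer data)
{
    /* handle the readable FD here */
    return TRUE; /* keep this watch alive */
}

/* in your setup code: */
GIOChannel *ch = g_io_channel_unix_new(fd);
guint watch_id = g_io_add_watch(ch, G_IO_IN, on_fd_ready, NULL);
g_io_channel_unref(ch); /* the watch holds its own reference */
/* later, when the FD list changes: */
g_source_remove(watch_id);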
I dug deeper into GLib and now:
I create a source with callback functions (prepare, check, dispatch, finalize)
In the prepare callback, FDs are deleted using g_source_remove_unix_fd() and then added to the current source using g_source_add_unix_fd().
I return FALSE from prepare and set the timeout (1 s for my example)
My issue is that without the FD, the prepare callback is called every 1 s as expected. When the FD is added, the prepare callback is called without the timeout; the poll exits immediately.
I had a look into the GLib source code, but I don't understand why.
Help please
Regards
amenophiks' answer is the best.
If you want your code to work with an older glib you can use:
g_source_add_poll()
g_source_remove_poll()
Have you read the Main Event Loop documentation? The description section has a pretty good explanation of how things work.
Have you looked at the Custom GSource tutorial? This allows you to extend a GSource object to include your own state. You can also write your own prepare, check, dispatch, and finalize functions.
Whenever I really want to see how something should be done with GLib, GTK, etc., the first place I look is the test code that lives in their git repository. Be sure to check out the proper tag for the version that you are targeting.
For example I currently target 2.48.2
Here are two pretty good examples
https://gitlab.gnome.org/GNOME/glib/blob/2.48.2/tests/mainloop-test.c
https://gitlab.gnome.org/GNOME/glib/blob/2.48.2/glib/tests/mainloop.c
The other nice feature is it's a git repository so you can search it very easily.
It seems I found a small hook. Try this:
#include <fcntl.h>
#include <stdio.h>
#include <glib.h>

struct source {
    GSource gsrc;
    GPollFD *gpfd;
};

struct data {
    /* Some data. */
};

static gboolean gsrc_dispatch(GSource *gsrc, GSourceFunc cb, gpointer data);
static struct data * data_alloc(void);

static GSourceFuncs gsf = {
    .prepare = NULL,
    .check = NULL,
    .dispatch = gsrc_dispatch,
    .finalize = NULL
};

int main(void)
{
    struct source *src;
    int fd;
    struct data *data = data_alloc();

    /* Something other. */

    /* For example, we want to capture video from a camera. */
    fd = open("/dev/video0", O_RDWR);
    if (fd < 0) {
        perror("open()");
        return -1;
    }

    src = (struct source *) g_source_new(&gsf, sizeof(struct source));
    src->gpfd = g_source_add_unix_fd((GSource *) src, fd, G_IO_IN);
    g_source_set_callback((GSource *) src, NULL, data, NULL);
    g_source_attach((GSource *) src, NULL);

    /* Something other and free. */

    return 0;
}

static gboolean
gsrc_dispatch(GSource *gsrc, GSourceFunc cb, gpointer data)
{
    struct source *src = (struct source *) gsrc;
    struct data *d = data;

    if (src->gpfd != NULL) {
        if (src->gpfd->revents & G_IO_IN) {
            /* Capture a frame. */
        }
    }

    g_main_context_iteration(NULL, TRUE);
    return G_SOURCE_CONTINUE;
}

static struct data *
data_alloc(void)
{
    /* Allocate some data. */
}
Yes, you can use the double gpfd pointer.

call ff_print_debug_info2 outside of mpegvideo.c file

I want void ff_print_debug_info2(...) to be called outside of the mpegvideo.c file. For instance, I want to call this function inside the following code snippet:
static int decode_packet(int *got_frame, int cached)
{
    int ret = 0;
    int decoded = pkt.size;
    *got_frame = 0;
    if (pkt.stream_index == video_stream_idx) {
        /* decode video frame */
        ret = avcodec_decode_video2(video_dec_ctx, frame, got_frame, &pkt);
        if (ret < 0) {
            fprintf(stderr, "Error decoding video frame (%s)\n", av_err2str(ret));
            return ret;
        }
        if (*got_frame) {
            /* here I want to print debug info */
            //void ff_print_debug_info2(AVCodecContext *avctx, AVFrame *pict, uint8_t *mbskip_table, uint32_t *mbtype_table, int8_t *qscale_table, int16_t (*motion_val[2])[2], int *low_delay, int mb_width, int mb_height, int mb_stride, int quarter_sample)
        }
    }
    return decoded;
}
I wonder whether it's possible, and how I should pass parameters into ff_print_debug_info2(...)?
P.S. The parameters I already know:
1. AVCodecContext *avctx: video_dec_ctx
2. AVFrame *pict: frame
3. int8_t *qscale_table: frame->qscale_table
How about the others?
This function is already called for you by the H264 decoder. It is unsupported by any other decoder and will cause crashes. You should never need to call it manually. If you're not seeing any debug information printed on the frame after H264 frame decoding, try to use:
avctx->debug |= FF_DEBUG_VIS_QP |
                FF_DEBUG_VIS_MB_TYPE |
                FF_DEBUG_SKIP |
                FF_DEBUG_QP |
                FF_DEBUG_MB_TYPE;
avctx->debug_mv = FF_DEBUG_VIS_MV_P_FOR |
                  FF_DEBUG_VIS_MV_B_FOR |
                  FF_DEBUG_VIS_MV_B_BACK;
after (thanks for the correction!) your call to avcodec_open2(). After that, you should see the appropriate debug information printed on the frame (*_VIS_*) or on the terminal (the others).
These flags are also supported by the MPEG-1/2/4 decoders, although they are implemented through a different function (ff_print_debug_info()).
As you can see in the headers, you can use it outside of the mpegvideo.c scope.
The function ff_print_debug_info2 is exposed through mpegvideo.h.
So just
#include <mpegvideo.h>
or
#include "mpegvideo.h"
in the source file where the function is to be used.

ALSA equivalent to /dev/audio dump?

This will be my poorest question ever...
On an old netbook, I installed an even older version of Debian, and toyed around a bit. One of the rather pleasing results was a very basic MP3 player (using libmpg123), integrated for adding background music to a little application doing something completely different. I grew rather fond of this little solution.
In that program, I dumped the decoded audio (from mpg123_decode()) to /dev/audio via a simple fwrite().
This worked fine - on the netbook.
Now, I came to understand that /dev/audio was something done by OSS, and is no longer supported on newer (ALSA) machines. Sure enough, my laptop (running a current Linux Mint) does not have this device.
So apparently I have to use ALSA instead. Searching the web, I've found a couple of tutorials, and they pretty much blow my mind. Modes, parameters, capabilities, access type, sample format, sample rate, number of channels, number of periods, period size... I understand that ALSA is a powerful API for the ambitious, but that's not what I am looking for (or have the time to grok). All I am looking for is how to play the output of mpg123_decode (the format of which I don't even know, not being an audio geek by a long shot).
Can anybody give me some hints on what needs to be done?
tl;dr
How do I get ALSA to play raw audio data?
There's an OSS compatibility layer for ALSA in the alsa-oss package. Install it and run your program inside the "aoss" program. Or, modprobe the modules listed here:
http://wiki.debian.org/SoundFAQ/#line-105
Then, you'll need to change your program to use "/dev/dsp" or "/dev/dsp0" instead of "/dev/audio". It should work how you remembered... but you might want to cross your fingers just in case.
You could install sox and open a pipe to the play command with the correct samplerate and sample size arguments.
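A hedged sketch of that pipe (the flag values are assumptions for 16-bit signed little-endian PCM; adjust the rate and channel count to what mpg123_getformat() reports):

#include <stdio.h>

/* Sketch: pipe raw PCM to SoX's play(1) via popen(). */
static FILE *open_play_pipe(long rate, int channels)
{
    char cmd[128];
    snprintf(cmd, sizeof(cmd),
             "play -q -t raw -e signed -b 16 -c %d -r %ld -", channels, rate);
    return popen(cmd, "w"); /* fwrite() decoded PCM into this, pclose() when done */
}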
Using ALSA directly is overly complicated, so I hope a Gstreamer solution is fine for you too. Gstreamer gives a nice abstraction over ALSA/OSS/Pulseaudio/you name it, and it is ubiquitous in the Linux world.
I wrote a little library that will open a FILE object that you can fwrite PCM data into:
Gstreamer file. The actual code is less than 100 lines.
Use it like this:
FILE *output = fopen_gst(rate, channels, bit_depth); // open audio output file
while (have_more_data) fwrite(data, amount, 1, output); // output audio data
fclose(output); // close the output file
I added an mpg123 example, too.
Here is the whole file (in case Github goes out of business ;-) ):
/**
 * gstreamer_file.c
 * Copyright 2012 René Kijewski <rene.SURNAME#fu-berlin.de>
 * License: LGPL 3.0 (http://www.gnu.org/licenses/lgpl-3.0)
 */

#include "gstreamer_file.h"

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#include <glib.h>
#include <gst/gst.h>

#ifndef _GNU_SOURCE
#   error "You need to add -D_GNU_SOURCE to the GCC parameters!"
#endif

/**
 * Cookie passed to the callbacks.
 */
typedef struct {
    /** { file descriptor to read from, fd to write to } */
    int pipefd[2];
    /** Gstreamer pipeline */
    GstElement *pipeline;
} cookie_t;

static ssize_t write_gst(void *cookie_, const char *buf, size_t size) {
    cookie_t *cookie = cookie_;
    return write(cookie->pipefd[1], buf, size);
}

static int close_gst(void *cookie_) {
    cookie_t *cookie = cookie_;
    gst_element_set_state(cookie->pipeline, GST_STATE_NULL); /* we are finished */
    gst_object_unref(GST_OBJECT(cookie->pipeline)); /* we won't access the pipeline anymore */
    close(cookie->pipefd[0]); /* the pipeline's read end */
    close(cookie->pipefd[1]); /* our write end */
    free(cookie); /* dispose the cookie */
    return 0;
}

FILE *fopen_gst(long rate, int channels, int depth) {
    /* initialize Gstreamer */
    if (!gst_is_initialized()) {
        GError *error = NULL;
        if (!gst_init_check(NULL, NULL, &error)) {
            g_error_free(error);
            return NULL;
        }
    }

    /* get a cookie */
    cookie_t *cookie = malloc(sizeof(*cookie));
    if (!cookie) {
        return NULL;
    }

    /* open a pipe to be used between the caller and the Gstreamer pipeline */
    if (pipe(cookie->pipefd) != 0) {
        free(cookie);
        return NULL;
    }

    /* set up the pipeline */
    char description[256];
    snprintf(description, sizeof(description),
            "fdsrc fd=%d ! " /* read from a file descriptor */
            "audio/x-raw-int, rate=%ld, channels=%d, " /* get PCM data */
            "endianness=1234, width=%d, depth=%d, signed=true ! "
            "audioconvert ! audioresample ! " /* convert/resample if needed */
            "autoaudiosink", /* output to speakers (using ALSA, OSS, Pulseaudio ...) */
            cookie->pipefd[0], rate, channels, depth, depth);
    cookie->pipeline = gst_parse_launch_full(description, NULL,
            GST_PARSE_FLAG_FATAL_ERRORS, NULL);
    if (!cookie->pipeline) {
        close(cookie->pipefd[0]);
        close(cookie->pipefd[1]);
        free(cookie);
        return NULL;
    }

    /* open a FILE with specialized write and close functions */
    cookie_io_functions_t io_funcs = { NULL, write_gst, NULL, close_gst };
    FILE *result = fopencookie(cookie, "w", io_funcs);
    if (!result) {
        close_gst(cookie);
        return NULL;
    }

    /* start the pipeline (of course it will wait for some data first) */
    gst_element_set_state(cookie->pipeline, GST_STATE_PLAYING);

    return result;
}
And ten years later, the "actual" answer is found: That's the wrong way to do it in the first place.
libmpg123 comes with a companion library, libout123, which abstracts the underlying audio system for you. Based on libmpg123 example code:
#include <stdlib.h>
#include "mpg123.h"
#include "out123.h"

int main()
{
    mpg123_handle * _mpg_handle;
    out123_handle * _out_handle;
    long rate;
    int channels, encoding;
    size_t position, buffer_size;
    unsigned char * buffer;
    char filename[] = "Example.mp3";

    mpg123_init();
    _mpg_handle = mpg123_new( NULL, NULL );
    _out_handle = out123_new();

    mpg123_open( _mpg_handle, filename );
    mpg123_getformat( _mpg_handle, &rate, &channels, &encoding );

    out123_open( _out_handle, NULL, NULL );
    mpg123_format_none( _mpg_handle );
    mpg123_format( _mpg_handle, rate, channels, encoding );
    out123_start( _out_handle, rate, channels, encoding );

    buffer_size = mpg123_outblock( _mpg_handle );
    buffer = malloc( buffer_size );

    do
    {
        mpg123_read( _mpg_handle, buffer, buffer_size, &position );
        out123_play( _out_handle, buffer, position );
    } while ( position );

    out123_close( _out_handle );
    mpg123_close( _mpg_handle );
    free( buffer );
}

What is a lightweight cross platform WAV playing library?

I'm looking for a lightweight way to make my program (written in C) be able to play audio files on either windows or linux. I am currently using windows native calls, which is essentially just a single call that is passed a filename. I would like something similar that works on linux.
The audio files are Microsoft PCM, single channel, 22 kHz.
Any suggestions?
Since I'm also looking for an answer to this question, I did a bit of research, and I haven't found any simple (as in calling a single function) way to play an audio file. But with some lines of code it is possible, even in a portable way, using the already mentioned PortAudio and libsndfile (LGPL).
Here is a small test case I've written to test both libs:
#include <stdio.h>
#include <portaudio.h>
#include <sndfile.h>

static int
output_cb(const void * input, void * output, unsigned long frames_per_buffer,
        const PaStreamCallbackTimeInfo *time_info,
        PaStreamCallbackFlags flags, void * data)
{
    SNDFILE * file = data;

    /* this should not actually be done inside of the stream callback
     * but in an own working thread
     *
     * Note although I haven't tested it for stereo I think you have
     * to multiply frames_per_buffer with the channel count i.e. 2 for
     * stereo */
    sf_read_short(file, output, frames_per_buffer);
    return paContinue;
}

static void
end_cb(void * data)
{
    printf("end!\n");
}

#define error_check(err) \
    do {\
        if (err) { \
            fprintf(stderr, "line %d ", __LINE__); \
            fprintf(stderr, "error number: %d\n", err); \
            fprintf(stderr, "\n\t%s\n\n", Pa_GetErrorText(err)); \
            return err; \
        } \
    } while (0)

int
main(int argc, char ** argv)
{
    PaStreamParameters out_param;
    PaStream * stream;
    PaError err;
    SNDFILE * file;
    SF_INFO sfinfo;

    if (argc < 2)
    {
        fprintf(stderr, "Usage %s <wav file>\n", argv[0]);
        return 1;
    }

    file = sf_open(argv[1], SFM_READ, &sfinfo);
    printf("%d frames %d samplerate %d channels\n", (int)sfinfo.frames,
            sfinfo.samplerate, sfinfo.channels);

    /* init portaudio */
    err = Pa_Initialize();
    error_check(err);

    /* we are using the default device */
    out_param.device = Pa_GetDefaultOutputDevice();
    if (out_param.device == paNoDevice)
    {
        fprintf(stderr, "Haven't found an audio device!\n");
        return -1;
    }

    /* stereo or mono */
    out_param.channelCount = sfinfo.channels;
    out_param.sampleFormat = paInt16;
    out_param.suggestedLatency = Pa_GetDeviceInfo(out_param.device)->defaultLowOutputLatency;
    out_param.hostApiSpecificStreamInfo = NULL;

    err = Pa_OpenStream(&stream, NULL, &out_param, sfinfo.samplerate,
            paFramesPerBufferUnspecified, paClipOff,
            output_cb, file);
    error_check(err);

    err = Pa_SetStreamFinishedCallback(stream, &end_cb);
    error_check(err);

    err = Pa_StartStream(stream);
    error_check(err);

    printf("Play for 5 seconds.\n");
    Pa_Sleep(5000);

    err = Pa_StopStream(stream);
    error_check(err);

    err = Pa_CloseStream(stream);
    error_check(err);

    sf_close(file);
    Pa_Terminate();

    return 0;
}
Some notes on the example: it is not good practice to do the data loading inside of the stream callback, but rather in its own loading thread. If you need to play several audio files it becomes even more difficult, because not all PortAudio backends support multiple streams for one device; for example the OSS backend doesn't, but the ALSA backend does. I don't know what the situation is on Windows. Since all your input files are of the same type you could mix them on your own, which complicates the code a bit more, but then you'd also have support for OSS. If you also had different sample rates or numbers of channels, it'd become very difficult.
So if you don't want to play multiple files at the same time, this could be a solution, or at least a start, for you.
SDL_Mixer, although not very lightweight, does have a simple interface to play WAV files. I believe, like SDL, SDL_Mixer is also LGPL.
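For illustration, a minimal SDL_mixer sketch (the file name alert.wav and the 22 kHz mono format are assumptions matching the question; a real program would check return values):

#include <SDL2/SDL.h>
#include <SDL2/SDL_mixer.h>

int main(void)
{
    SDL_Init(SDL_INIT_AUDIO);
    Mix_OpenAudio(22050, MIX_DEFAULT_FORMAT, 1, 2048); /* 22 kHz, mono */
    Mix_Chunk *wav = Mix_LoadWAV("alert.wav");         /* hypothetical file */
    Mix_PlayChannel(-1, wav, 0);                       /* play once on any free channel */
    SDL_Delay(2000);                                   /* crude: give playback time to finish */
    Mix_FreeChunk(wav);
    Mix_CloseAudio();
    SDL_Quit();
    return 0;
}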
OpenAL is another cross platform audio library that is more geared towards 3D audio.
Yet another open source audio library that you might want to check it out is PortAudio
I've used OpenAL to play wav files as alerts/warnings in an Air Traffic Control system
The advantages I've found are
it is cross platform
works with C (and others but your question is about C)
light weight
good documentation available on the web
the license is LGPL so you call the API with no license problems
You can try with this one: libao
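A hedged libao sketch (the PCM buffer is assumed to come from elsewhere, e.g. a decoded WAV; format fields as documented by libao):

#include <string.h>
#include <ao/ao.h>

/* Sketch: play a buffer of 16-bit mono 22 kHz PCM through the default driver. */
void play_pcm(char *pcm, int n_bytes)
{
    ao_initialize();
    ao_sample_format fmt;
    memset(&fmt, 0, sizeof(fmt));
    fmt.bits = 16;
    fmt.rate = 22050;
    fmt.channels = 1;
    fmt.byte_format = AO_FMT_LITTLE;
    ao_device *dev = ao_open_live(ao_default_driver_id(), &fmt, NULL);
    if (dev) {
        ao_play(dev, pcm, n_bytes);
        ao_close(dev);
    }
    ao_shutdown();
}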
I like FMOD. The license is free for personal use, and very reasonable for small shareware or commercial projects
You could also try Audiere. The last release is dated 2006, but it is open-source and licensed under the LGPL.
I used irrKlang!
"irrKlang is a cross platform sound library for C++, C# and all .NET languages"
http://www.ambiera.com/irrklang/