Changing playback speed using GStreamer (C)

I am currently working through the GStreamer tutorials, in particular the one about playback speed adjustment. I pasted the example code into a file, which I compiled with the flags from pkg-config --cflags --libs gstreamer-1.0 (my GStreamer version is 1.20.5).
I tried to change the playback rate using the keys S/s and got the corresponding prints (Current rate: 0.5 etc.), but the playback speed stayed constant at 1. I thought the failure to change the playback speed was due to the source being a remote file, so I changed the code to use local files (passed as command-line arguments) instead:
gchar buffer[4096];
/* Note: a file:// URI needs an absolute path */
g_snprintf(buffer, 4096, "playbin uri=file://%s", argv[1]);
/* Build the pipeline */
data.pipeline = gst_parse_launch(buffer, &error);
I also switched from the video sink to an audio sink (audio is sufficient for my purposes).
I then noticed that whether or not the rate change works apparently depends on the type of file I am opening: when I open a local Ogg file, the playback speed changes; when I open an MP3 instead, nothing happens.
Is this a bug in GStreamer, or do I need a more sophisticated pipeline to make this approach work with different media types (local files would be sufficient for my needs)?
Edit: Complete code, sample mp3
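For reference, the rate change is sent as a seek event, essentially this condensed version of the tutorial code (adapted to fetch playbin's audio-sink since I switched to audio; data->audio_sink is a GstElement * field I added to the tutorial's CustomData):

static void send_seek_event (CustomData *data) {
  gint64 position;
  GstEvent *seek_event;

  /* The seek event needs the current position */
  if (!gst_element_query_position (data->pipeline, GST_FORMAT_TIME, &position)) {
    g_printerr ("Unable to retrieve current position.\n");
    return;
  }

  /* Build a flushing seek that keeps the position but applies the new rate */
  seek_event = gst_event_new_seek (data->rate, GST_FORMAT_TIME,
      GST_SEEK_FLAG_FLUSH | GST_SEEK_FLAG_ACCURATE,
      GST_SEEK_TYPE_SET, position, GST_SEEK_TYPE_END, 0);

  /* Send the event through the sink (here playbin's audio sink) */
  if (data->audio_sink == NULL)
    g_object_get (data->pipeline, "audio-sink", &data->audio_sink, NULL);
  gst_element_send_event (data->audio_sink, seek_event);
}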

Related

Using parallel decoder in libavcodec/ffmpeg

The problem
I am writing a simple program in C which makes use of libavcodec (https://github.com/Dr-Noob/framepos/blob/master/framepos.c).
I have been working with h264 videos. While the decoding works, I can see that it's very slow, since it only uses one CPU core (I checked with top). On the other hand, I know that ffmpeg, which uses the same libavcodec installed on my system, makes use of the parallel h264 decoder. I can test it with:
ffmpeg -c:v h264 -i test.mkv -f null -
With top I can see that it runs in parallel, and the speed is noticeably faster. I would like a solution that always gives me the possibility of decoding video using all the CPU cores, not only in the case of the h264 codec.
My research so far
Looking at the ffmpeg code, one can see that, to obtain the AVCodec, it uses the function find_codec_or_die, which ends up calling avcodec_find_decoder_by_name. However, if I use this function in my program, asking for the h264 decoder, I still get the sequential version. Moreover, using gdb on ffmpeg I have seen that the AVCodec in ffmpeg is called ff_h264_decoder, while in my code gdb does not know the specific type of the codec. The ff prefix makes me think that this one is the parallel decoder (it looks like ff has something to do with parallelism in the ffmpeg context: https://ffmpeg.org/doxygen/2.7/pthread__frame_8c.html). However, it seems that I am unable to get this codec.
What can I do to decode video in parallel using libavcodec in C?
Posting gkv311's comment as an answer for future reference.
AVCodec does not have multi-threading functionality; the threading configuration is stored inside the AVCodecContext. So, a possible scheme to run the codec in parallel:
AVCodec *codec = avcodec_find_decoder(stream->codecpar->codec_id);
AVCodecContext *ctx = avcodec_alloc_context3(codec);
ctx->thread_count = n_threads;         /* 0 lets libavcodec pick the core count */
ctx->thread_type = FF_THREAD_FRAME;    /* decode whole frames in parallel */
avcodec_open2(ctx, codec, NULL);
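Put together, a minimal frame-threaded decode loop could look like this (a sketch using the modern send/receive API, with error handling trimmed; test.mkv is a placeholder, and on libavcodec versions before 59 the AVCodec pointer is non-const):

#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>

int main(void) {
    AVFormatContext *fmt_ctx = NULL;
    if (avformat_open_input(&fmt_ctx, "test.mkv", NULL, NULL) < 0)
        return 1;
    avformat_find_stream_info(fmt_ctx, NULL);

    int vs = av_find_best_stream(fmt_ctx, AVMEDIA_TYPE_VIDEO, -1, -1, NULL, 0);
    const AVCodec *codec = avcodec_find_decoder(fmt_ctx->streams[vs]->codecpar->codec_id);
    AVCodecContext *ctx = avcodec_alloc_context3(codec);
    avcodec_parameters_to_context(ctx, fmt_ctx->streams[vs]->codecpar);

    ctx->thread_count = 0;                /* 0 = auto-detect number of cores */
    ctx->thread_type = FF_THREAD_FRAME;   /* frame-level parallelism */
    avcodec_open2(ctx, codec, NULL);

    AVPacket *pkt = av_packet_alloc();
    AVFrame *frame = av_frame_alloc();
    while (av_read_frame(fmt_ctx, pkt) >= 0) {
        if (pkt->stream_index == vs && avcodec_send_packet(ctx, pkt) == 0)
            while (avcodec_receive_frame(ctx, frame) == 0)
                ;  /* process the decoded frame here */
        av_packet_unref(pkt);
    }

    av_frame_free(&frame);
    av_packet_free(&pkt);
    avcodec_free_context(&ctx);
    avformat_close_input(&fmt_ctx);
    return 0;
}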

How to execute ffmpeg code on a GPU without using the command line?

We have written a short program in C to read a video file, using common libraries such as libavcodec, libavformat, etc.
The code runs smoothly, but only uses CPU resources. We need to run the code on the GPU (Nvidia GeForce 940MX and 1080Ti). Is there a way to force the code to run on the GPU?
While things work fine from the command line (e.g., ffmpeg -hwaccel cuvid -i vid.mp4 out.avi), we have not been able to get it working on the GPU from source code.
We are working with Ubuntu 18.04, with ffmpeg correctly compiled against CUDA 9.2.
There are pretty good examples for using libav (ffmpeg) for encoding and decoding video at https://github.com/FFmpeg/FFmpeg/tree/master/doc/examples.
What you need is the demuxing_decoding.c example; change line 166, which is:
/* find decoder for the stream */
dec = avcodec_find_decoder(st->codecpar->codec_id);
with
/* find decoder for the stream */
if (st->codecpar->codec_id == AV_CODEC_ID_H264) {
    dec = avcodec_find_decoder_by_name("h264_cuvid");
} else if (st->codecpar->codec_id == AV_CODEC_ID_HEVC) {
    dec = avcodec_find_decoder_by_name("hevc_cuvid");
} else {
    dec = avcodec_find_decoder(st->codecpar->codec_id);
}
Add/change lines for other formats, and make sure your FFmpeg is compiled with --enable-cuda --enable-cuvid.
In my tests I got an error coming from line 85, because nvdec (hevc_cuvid) uses the P010 internal format for 10-bit input (yuv420p10). This means a decoded frame will be in either the NV12 or the P010 pixel format, depending on the bit depth. I hope you are familiar with pixel formats.
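If you need to check what the decoder actually produced, printing the frame's pixel format after decoding is a quick way to see it (frame here is assumed to be the decoded AVFrame from the example):

/* Requires libavutil/pixdesc.h */
printf("decoded pixel format: %s\n",
       av_get_pix_fmt_name((enum AVPixelFormat) frame->format));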
Hope that helps.

libsox: record from default microphone

I need to open the default audio capture device and start recording. libsox seems to be a nice cross-platform solution. Using the binary frontend, I can just run rec test.wav and the default microphone is activated.
However, browsing the documentation, no similar functionality seems to exist. This thread discusses precisely the same topic as my question, but doesn't seem to have reached a solution.
Where can I find an example of using libsox to record from the default audio device?
You can record using libsox. Just set the input file to "default" and set the filetype to the audio driver (e.g. coreaudio on macOS, alsa or oss on Linux):
const char* audio_driver = "alsa";
sox_format_t* input = sox_open_read("default", NULL, NULL, audio_driver);
Look at some examples for more info on how to structure the rest of the code.
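A minimal end-to-end sketch (omitting error checks; the alsa driver and the output path test.wav are assumptions to adjust for your platform):

#include <sox.h>

int main(void) {
    sox_init();

    /* "default" device; the filetype is the audio driver, not a file format */
    sox_format_t *in = sox_open_read("default", NULL, NULL, "alsa");
    /* Write a wav file with the same signal parameters as the input */
    sox_format_t *out = sox_open_write("test.wav", &in->signal,
                                       NULL, "wav", NULL, NULL);

    sox_sample_t buf[8192];
    size_t n;
    while ((n = sox_read(in, buf, 8192)) > 0)  /* stop with Ctrl-C */
        sox_write(out, buf, n);

    sox_close(out);
    sox_close(in);
    sox_quit();
    return 0;
}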
You need to record with ALSA first and use libsox to get the right format; libsox is not for recording. See this example: https://gist.github.com/albanpeignier/104902

How to play .m4s files from an MPEG-DASH MPD in a player?

I have downloaded the MPD "http://dash.edgesuite.net/adobe/hdworld_dash/HDWorld.mpd" and all related .m4s files.
I tried playing it in VLC, but the format is not recognized.
I downloaded the media segments using wget (segments 1 to 14 are available), e.g.:
http://dash.edgesuite.net/adobe/hdworld_dash/hdworld_seg_hdworld_0696kbps_ffmpeg.mp4.video_temp2.m4s
Can anybody tell me how to play .m4s files in a player?
System: Ubuntu 11.10
You need the initialization segment. It is often named "00" or "init" or doesn't have a sequence number like the other files, and often ends in ".mp4" rather than ".m4s". Then you just concatenate the files together. You can start anywhere in the sequence so long as you begin with the initialization segment.
For example
cat init.mp4 *.m4s > output.mp4
Then you have a playable mp4 file with content, assuming there is no encryption (DRM) applied to it.
The .m4s file format is ISO Base Media File Format (MPEG-4 Part 12). Read the specs for more info; you may find an m4s player for Windows. As far as I know, on Linux GPAC will help. You can create your own MPD from any media source using MP4Box, a GPAC tool.
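For example, creating segments and an MPD from an MP4 can look something like this (4-second segments; see MP4Box -h dash for the full option list):
MP4Box -dash 4000 -rap -out MYWorld.mpd input.mp4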
You can use MP4Client to play your DASHed media from the MPD. A separate .m4s segment cannot be played on its own, because a player needs the codec and MIME type to play any media, and a bare .m4s is not supported by any player; it carries only its own header and data (moof & mdat).
To play an MPD which contains many .m4s segments (you can make your own MPD, or download each audio and video segment separately from any MPD and put them into the same folder):
Install GPAC.
$ MP4Client MYWorld.mpd will open the Osmo4 player and you can see your video playing. Enjoy.
FYI, a local streaming server can also serve this video:
$ MP4Client http://localhost/MYWorld.mpd
If it does not work, change the segmentAlignment flag, i.e. <AdaptationSet segmentAlignment="true" subsegmentAlignment="true">.
You can play it using the GPAC player, installing it with all the third-party codecs as well:
http://gpac.wp.mines-telecom.fr/player/
Some people claim that they are able to use VLC; I have not tested it.
Try this in the OSX terminal:
open -a Osmo4 example.mpd
It works for me.

Remux MPEG TS -> RTP MPEG ES

Please guide me to achieve the following result in my program (written in C):
I have a stream source, an HTTP MPEG TS stream (codecs h264 & aac); it has 1 video and 1 audio substream.
I need to get MPEG ES frames (of the same codecs) to send them via RTP to RTSP clients. It would be best if libavformat gave me frames with the RTP header already attached.
MPEG ES is needed because, as far as I know, media players on BlackBerry phones do not play TS (I tried it).
That said, I would appreciate it if anyone could point me to another format that is easier to produce in this situation, can hold h264 & aac, and plays well on BlackBerry and other phones.
I have already succeeded with another task: opening the stream and remuxing it to an FLV container.
I tried opening two output format contexts with the "rtp" format and also got frames; I sent them to the client with no success.
I have also tried writing frames to an "m4v" AVFormatContext: I got frames, cut them by NAL unit, added an RTP header before each frame, and sent them to the client. The client displays the first frame and hangs, or plays a second of video+audio (faster than needed) every 10 seconds or more.
In the VLC player log I have this: http://pastebin.com/NQ3htvFi
I scaled the timestamps to make them start at 0 for simplicity.
Comparing with VLC (or Wowza, sorry, I don't remember which), which incremented the audio timestamps by 1024, not 1920, I applied additional linear scaling to be similar to other streamers.
A packet dump of the playback of bigbuckbunny_450.mp4 is here:
ftp://rtb.org.ua/tmp/output_my_bbb_450.log
BTW, in both cases I copied the SDP nearly verbatim from Wowza or VLC.
What is the right way to get what I need?
I am also interested in whether there is some library similar to libavformat, maybe even in an embryonic state.
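For reference, what I am trying with the "rtp" muxer looks roughly like this (a sketch, untested; the RTP muxer carries one stream per context, so audio needs a second context on another port; address/port are placeholders and ist is the input video stream):

AVFormatContext *octx = NULL;
avformat_alloc_output_context2(&octx, NULL, "rtp", "rtp://127.0.0.1:5004");

AVStream *ost = avformat_new_stream(octx, NULL);
avcodec_parameters_copy(ost->codecpar, ist->codecpar);

avio_open(&octx->pb, "rtp://127.0.0.1:5004", AVIO_FLAG_WRITE);
avformat_write_header(octx, NULL);

/* Ask libavformat for the matching SDP instead of copying one by hand */
char sdp[2048];
av_sdp_create(&octx, 1, sdp, sizeof(sdp));

/* Per input packet: rescale timestamps and write */
av_packet_rescale_ts(pkt, ist->time_base, ost->time_base);
pkt->stream_index = ost->index;
av_interleaved_write_frame(octx, pkt);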
