I have written some C code that takes an mp4 file with h264-encoded video and AAC-encoded audio and writes it to segmented .ts files.
The code can be seen here: http://pastebin.com/JVdgjM9G
The problem is that the code chokes on audio packets. Because I am converting from H.264 in MP4, I have to use the "h264_mp4toannexb" bitstream filter, which I finally got working for video frames. However, as soon as the program reaches the first audio packet (stream 1 below), it crashes.
Sample output:
Output #0, mpegts, to 'testvideo':
Stream #0.0: Video: libx264, yuv420p, 1280x720, q=2-31, 1416 kb/s, 90k tbn, 23.98 tbc
Stream #0.1: Audio: libfaac, 48000 Hz, stereo, 127 kb/s
First chunk: testvideo-00001.ts
Read frame, keyframe: 1, index: 0
Did bitfilter fun!
Read frame, keyframe: 0, index: 0
Did bitfilter fun!
(...this repeats several more times, truncated for space...)
Did bitfilter fun!
Read frame, keyframe: 0, index: 0
Did bitfilter fun!
Read frame, keyframe: 1, index: 1
base(54516) malloc: *** error for object 0x7fd2db404520: pointer being freed was not allocated
*** set a breakpoint in malloc_error_break to debug
Abort trap: 6
I tried changing the code to also run the filter on the audio stream (using audio_stream->codec instead of video_stream->codec), but that just produces an error from the filter.
The problem happens when I call av_interleaved_write_frame(output_context, &packet); the filtered video packets go through fine, but it chokes completely on the audio packet. I am stumped as to why, so any help is appreciated.
It turns out the av_free_packet call after the bitstream-filter manipulation was releasing the video packets. Removing that call made the code run correctly!
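For anyone hitting the same crash, the filtering block ended up looking roughly like this (a simplified sketch, not the exact pastebin code; h264bsfc and video_index are placeholder names, and the legacy av_bitstream_filter_filter API is shown):

/* h264bsfc = av_bitstream_filter_init("h264_mp4toannexb"); done once at setup */
if (packet.stream_index == video_index)
{
    AVPacket filtered = packet;
    /* convert the MP4-style NALs to Annex B for the mpegts muxer */
    av_bitstream_filter_filter(h264bsfc, video_stream->codec, NULL,
                               &filtered.data, &filtered.size,
                               packet.data, packet.size,
                               packet.flags & AV_PKT_FLAG_KEY);
    /* NOTE: no av_free_packet(&packet) here -- that call was what
     * triggered the "pointer being freed was not allocated" abort */
    av_interleaved_write_frame(output_context, &filtered);
}
else
{
    /* audio packets are written through untouched */
    av_interleaved_write_frame(output_context, &packet);
}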
I'm trying to run ORB-SLAM3 and I keep getting the following error messages:
ORB-SLAM3 Copyright (C) 2017-2020 Carlos Campos, Richard Elvira, Juan J. Gómez, José M.M. Montiel and Juan D. Tardós, University of Zaragoza.
ORB-SLAM2 Copyright (C) 2014-2016 Raúl Mur-Artal, José M.M. Montiel and Juan D. Tardós, University of Zaragoza.
This program comes with ABSOLUTELY NO WARRANTY;
This is free software, and you are welcome to redistribute it
under certain conditions. See LICENSE.txt.
Input sensor was set to: Stereo
Loading ORB Vocabulary. This could take a while...
Vocabulary loaded!
Creation of new map with id: 0
Creation of new map with last KF id: 0
Seq. Name:
Camera Parameters:
fx: 435.20468139648438
fy: 435.20468139648438
cx: 367.45172119140625
cy: 252.20085144042969
bf: 47.906394958496094
k1: 0
k2: 0
p1: 0
p2: 0
fps: 20
color order: RGB (ignored if grayscale)
ORB Extractor Parameters:
Number of Features: 1200
Scale Levels: 8
Scale Factor: 1.2000000476837158
Initial Fast Threshold: 20
Minimum Fast Threshold: 7
Depth Threshold (Close/Far Points): 3.8527247905731201
Failed to load image at: /home/yz2qmq/ORB_SLAM3/V1_01_easy/MH01/mav0/cam0/data/1403636579763555584.png
./euroc_eval_examples.sh: line 16: 20460 Segmentation fault (core dumped) ./Examples/Stereo/stereo_euroc ./Vocabulary/ORBvoc.txt ./Examples/Stereo/EuRoC.yaml "$pathDatasetEuroc"/MH01 ./Examples/Stereo/EuRoC_TimeStamps/MH01.txt "$pathDatasetEuroc"/MH02 ./Examples/Stereo/EuRoC_TimeStamps/MH02.txt "$pathDatasetEuroc"/MH03 ./Examples/Stereo/EuRoC_TimeStamps/MH03.txt "$pathDatasetEuroc"/MH04 ./Examples/Stereo/EuRoC_TimeStamps/MH04.txt "$pathDatasetEuroc"/MH05 ./Examples/Stereo/EuRoC_TimeStamps/MH05.txt dataset-MH01_to_MH05_stereo
Evaluation of MAchine Hall trajectory with Stereo sensor
Traceback (most recent call last):
File "evaluation/evaluate_ate_scale.py", line 151, in
second_list = associate.read_file_list(args.second_file, False)
File "/home/yz2qmq/ORB_SLAM3/evaluation/associate.py", line 64, in read_file_list
file = open(filename)
IOError: [Errno 2] No such file or directory: 'f_dataset-MH01_to_MH05_stereo.txt'
Launching V102 with Monocular-Inertial sensor
num_seq = 1
file name: dataset-V102_monoi
Loading images for sequence 0...LOADED!
I made the required changes in euroc_eval_examples.sh and changed it to use my dataset path!
Can someone help me with this issue?
I had the same problem here.
I just had the path and its syntax all messed up.
Try to run the whole thing with this command:
./Examples/Monocular/mono_euroc ./Vocabulary/ORBvoc.txt ./Examples/Monocular/EuRoC.yaml "$pathDatasetEuroc"/MH01 ./Examples/Monocular/EuRoC_TimeStamps/MH01.txt dataset-MH01_mono
And where it says "$pathDatasetEuroc", make sure you put the right path before MH01, inside which there should only be the mav0 folder.
In your case it should be /home/yz2qmq/ORB_SLAM3/V1_01_easy
----> ./Examples/Monocular/mono_euroc ./Vocabulary/ORBvoc.txt ./Examples/Monocular/EuRoC.yaml /home/yz2qmq/ORB_SLAM3/V1_01_easy/MH01 ./Examples/Monocular/EuRoC_TimeStamps/MH01.txt dataset-MH01_mono
That at least worked for me.
It is exactly the same thing as running ./euroc_examples.sh, and you will realize that easily if you open euroc_examples.sh and briefly read it.
(By the way, this is a guide which actually worked for me, but only after following it blindly :( https://www.ybliu.com/2020/07/ORB-SLAM3-demo.html)
We have written a short program in C to read a video file, using common libraries such as libavcodec, libavformat, etc.
The code runs smoothly, but it only uses CPU resources. We need to run the code on the GPU (Nvidia GeForce 940MX and 1080Ti). Is there a way to force the code to run on the GPU?
While things work fine from the command line (e.g., ffmpeg -hwaccel cuvid -i vid.mp4 out.avi), we have not been able to get it working on the GPU from the source code.
We are working with Ubuntu 18.04, and FFmpeg correctly compiled with CUDA 9.2.
There are pretty good examples for using libav (ffmpeg) for encoding and decoding video at https://github.com/FFmpeg/FFmpeg/tree/master/doc/examples.
For what you need, take the demuxing_decoding.c example and change line 166, which is:
/* find decoder for the stream */
dec = avcodec_find_decoder(st->codecpar->codec_id);
with
/* find decoder for the stream */
if (st->codecpar->codec_id == AV_CODEC_ID_H264)
{
    dec = avcodec_find_decoder_by_name("h264_cuvid");
}
else if (st->codecpar->codec_id == AV_CODEC_ID_HEVC)
{
    dec = avcodec_find_decoder_by_name("hevc_cuvid");
}
else
{
    dec = avcodec_find_decoder(st->codecpar->codec_id);
}
Add/change lines for other formats. And make sure your FFmpeg is compiled with --enable-cuda --enable-cuvid.
In my tests I got an error that comes from line 85, because nvdec (hevc_cuvid) uses the P010 internal format for 10-bit input (the input is yuv420p10). This means the decoded frame will be in either the NV12 or P010 pixel format, depending on bit depth. I hope you are familiar with pixel formats.
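If the rest of your pipeline expects yuv420p, you can convert the decoded frames back on the CPU with libswscale. A minimal sketch, assuming 8-bit NV12 output (this is my addition, not part of the demuxing_decoding.c example):

#include <libavutil/frame.h>
#include <libswscale/swscale.h>

/* convert a decoded NV12 frame to YUV420P so CPU-only code keeps working */
static AVFrame *nv12_to_yuv420p(const AVFrame *src)
{
    AVFrame *dst = av_frame_alloc();
    if (!dst)
        return NULL;
    dst->format = AV_PIX_FMT_YUV420P;
    dst->width  = src->width;
    dst->height = src->height;
    if (av_frame_get_buffer(dst, 0) < 0)
    {
        av_frame_free(&dst);
        return NULL;
    }
    struct SwsContext *sws = sws_getContext(src->width, src->height,
                                            (enum AVPixelFormat)src->format,
                                            dst->width, dst->height,
                                            AV_PIX_FMT_YUV420P,
                                            SWS_BILINEAR, NULL, NULL, NULL);
    if (!sws)
    {
        av_frame_free(&dst);
        return NULL;
    }
    sws_scale(sws, (const uint8_t * const *)src->data, src->linesize,
              0, src->height, dst->data, dst->linesize);
    sws_freeContext(sws);
    return dst;
}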
Hope that helps.
So I'm having a problem with AAC encoding. I'm trying to encode some synthetic sound waves, but it does not work as expected. The file I get gives just a super short sound when played in VLC. When I play it in ffplay it plays like I expect, but it reports "duration: 00:00:00.05", which I suppose means 50 ms. But I encode a lot more than that, and it plays for longer. So VLC plays a super short sound, while ffplay plays a longer file (the expected length) but displays it with a super short duration. What's going on?
Source: http://pastebin.com/M5MKkEL3
One of the things that looks wrong to me is this:
If you look at the variable "audio_time", setting a breakpoint and reading it at every encoded frame, you will get this:
..
Encode frame 8: 0.00010416666666666666
Encode frame 9: 0.00012500000000000000
(and so on)
The diff is ~0.00002085, which is 1/1000 of the diff I expected from ~47 sample frames per second, which is what the encoder wants with a 48k sample rate (48k / 1024 ≈ 47).
So why do I only get a thousandth of the expected data encoded?
Feel free to point anything suspicious out!
Thanks in advance!
So I found it: at line 75, this was missing:
AVRational time_base;                           /* codec time base: one tick per audio frame */
time_base.num = per_frame_audio_samples;
time_base.den = aud_codec_context->sample_rate;
aud_codec_context->time_base = time_base;
per_frame_audio_samples = 1024 in my case.
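With that time base, one pts tick corresponds to one whole audio frame, so the pts handed to the encoder can simply be the running frame index. An illustrative sketch, not taken from the pastebin, shown here with the send/receive encode API:

#include <libavcodec/avcodec.h>

/* one pts tick == one frame of per_frame_audio_samples samples */
static int encode_audio_frame(AVCodecContext *aud_codec_context,
                              AVFrame *frame, AVPacket *pkt,
                              int64_t frame_index)
{
    frame->pts = frame_index;   /* in codec time_base units */
    int ret = avcodec_send_frame(aud_codec_context, frame);
    if (ret < 0)
        return ret;
    return avcodec_receive_packet(aud_codec_context, pkt);
}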
I am transcoding a video using the FFmpeg API in C code.
I am trying to set the video bit rate as shown below:
ovCodecCtx->bit_rate = 100 * 1000;
The encoder I am using is libx264.
But this parameter does not take effect, and the resulting video quality is very bad.
I have even tried setting related parameters like rc_min_rate, rc_max_rate, etc., but the video quality is still very low, as these related parameters do not take effect either.
Could any expert tell how one can set the bit rate correctly using the FFMPEG API?
Thanks
I have found the solution to my problem. In fact, somebody facing the same problem had posted the solution on the ffmpeg (libav) user forum, and it works in my case too. I am posting the answer to my own question so that other users facing a similar issue might benefit from this post.
Problem:
Setting the video bit rate programmatically for the H.264 video codec was not honoured by the libx264 encoder. Even though it worked for the MPEG-1, MPEG-2 and MPEG-4 video codecs, this setting was not recognised for H.264, and the resulting video quality was very bad.
Solution:
We need to set the pts of the decoded/resized frames before they are fed to the encoder.
The person who found the solution went through the ffmpeg.c source and was able to figure this out. We first need to rescale the AVFrame's pts from the stream's time_base to the codec's time_base to get a simple frame number (e.g. 1, 2, 3).
pic->pts = av_rescale_q(pic->pts, ost->time_base, ovCodecCtx->time_base);
avcodec_encode_video2(ovCodecCtx, &newpkt, pic, &got_packet_ptr);
And when we receive back the encoded packet from the libx264 codec, we need to rescale the pts and dts of the encoded video packet to the stream's time base:
newpkt.pts = av_rescale_q(newpkt.pts, ovCodecCtx->time_base, ost->time_base);
newpkt.dts = av_rescale_q(newpkt.dts, ovCodecCtx->time_base, ost->time_base);
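Putting it together, the encode step looks roughly like this (an illustrative sketch with the same variable names as above; ost is assumed to be the output AVStream and ofmt_ctx is a placeholder for the output format context, and error handling is omitted):

AVPacket newpkt;
int got_packet_ptr = 0;
av_init_packet(&newpkt);
newpkt.data = NULL;
newpkt.size = 0;

/* stream time base -> codec time base before encoding */
pic->pts = av_rescale_q(pic->pts, ost->time_base, ovCodecCtx->time_base);
avcodec_encode_video2(ovCodecCtx, &newpkt, pic, &got_packet_ptr);

if (got_packet_ptr)
{
    /* codec time base -> stream time base before muxing */
    newpkt.pts = av_rescale_q(newpkt.pts, ovCodecCtx->time_base, ost->time_base);
    newpkt.dts = av_rescale_q(newpkt.dts, ovCodecCtx->time_base, ost->time_base);
    newpkt.stream_index = ost->index;
    av_interleaved_write_frame(ofmt_ctx, &newpkt);
}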
Thanks
Please guide me to achieve the following result in my program (written in C):
I have a stream source: an HTTP MPEG-TS stream (codecs H.264 & AAC). It has 1 video and 1 audio substream.
I need to get MPEG ES frames (of the same codecs) to send them via RTP to RTSP clients. It would be best if libavformat gave me the frames with an RTP header.
MPEG ES is needed because, as far as I know, media players on BlackBerry phones do not play TS (I tried it).
That said, I would appreciate it if anyone could point me to another format that is easier to get in this situation, can hold H.264 & AAC, and plays well on BlackBerry and other phones.
I've already succeeded with another task: opening the stream and remuxing it to an FLV container.
I tried opening two output format contexts with the "rtp" format; I also got frames and sent them to the client. No success.
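For reference, the "rtp" output contexts were set up roughly like this (a simplified sketch of the idea, not my exact code; dest_ip and the port are placeholders, and only the video substream is shown):

AVFormatContext *rtp_ctx = NULL;
avformat_alloc_output_context2(&rtp_ctx, NULL, "rtp", "rtp://dest_ip:5004");
AVStream *out_stream = avformat_new_stream(rtp_ctx, NULL);
avcodec_parameters_copy(out_stream->codecpar, in_video_stream->codecpar);

avio_open(&rtp_ctx->pb, "rtp://dest_ip:5004", AVIO_FLAG_WRITE);
avformat_write_header(rtp_ctx, NULL);   /* the SDP can be generated with av_sdp_create() */

/* then, for each demuxed video packet: rescale pts/dts to the new
 * stream's time base and call av_interleaved_write_frame(rtp_ctx, &pkt) */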
I've also tried writing frames to an "m4v" AVFormatContext, got frames, cut them at NAL boundaries, added an RTP header before each frame, and sent them to the client. The client displays the first frame and hangs, or plays a second of video+audio (faster than needed) every 10 seconds or more.
In VLC player log i have this: http://pastebin.com/NQ3htvFi
I've scaled the timestamps to make them start at 0 for simplicity.
I compared it with what VLC (or Wowza, sorry, I don't remember which) produces: it incremented the audio TS by 1024, not 1920, so I did additional linear scaling to be similar to other streamers.
Packet dump of playback of bigbuckbunny_450.mp4 is here:
ftp://rtb.org.ua/tmp/output_my_bbb_450.log
By the way, in both cases I basically copied the SDP from Wowza or VLC.
What is the right way to get what i need?
I'm also interested in whether there is some library similar to libavformat, maybe even in an embryonic state.