OpenMAX, Raspberry Pi: Get Video Dimensions of H264 - c

Is there any way to get the video dimensions of an H264 video on the Raspberry Pi using OpenMAX directly, without having to use ffmpeg or something else? All the Pi examples appear to have hardcoded values for that.
Thanks!

Yes, this is possible by querying the OMX_PARAM_PORTDEFINITIONTYPE structure of the decoder output port. You have to use something along these lines:
OMX_PARAM_PORTDEFINITIONTYPE portdef;
memset(&portdef, 0, sizeof(portdef));
portdef.nSize = sizeof(OMX_PARAM_PORTDEFINITIONTYPE);
portdef.nVersion.nVersion = OMX_VERSION;
portdef.nPortIndex = 131;   /* output port of the video_decode component */
OMX_GetParameter(ILC_GET_HANDLE(video_decode), OMX_IndexParamPortDefinition, &portdef);
printf("Width: %u, Height: %u\n",
       (unsigned)portdef.format.video.nFrameWidth,
       (unsigned)portdef.format.video.nFrameHeight);
Please note that this will only give you correct values after the OMX_EventPortSettingsChanged event has fired (which happens after processing the first buffer). Otherwise, these values can and probably will be wrong.
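For example, with the ilclient helper library used by the hello_pi samples, the query is typically gated on that event; here is a minimal sketch, assuming the ilclient helpers and the usual port numbering:
/* After feeding the first buffer(s) to the decoder, check whether the output
 * port (131) has reported its settings; only then is the port definition
 * queried above trustworthy. ilclient_remove_event() returns 0 once the
 * event has been received. */
if (ilclient_remove_event(video_decode, OMX_EventPortSettingsChanged,
                          131, 0, 0, 1) == 0) {
    /* safe to read portdef.format.video.nFrameWidth / nFrameHeight
     * via OMX_IndexParamPortDefinition as shown above */
}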

Related

aac encoding with ffmpeg result in super short file

So I'm having a problem with AAC encoding. I'm trying to encode some synthetic sound waves, but it does not work as expected. The file I get plays as just a super short sound in VLC. When I play it in ffplay it plays like I expect, but it reports "duration: 00:00:00.05", which I suppose means about 50 ms. But I encode a lot more than that, and it plays for longer. So VLC plays a super short sound, while ffplay plays the full file (the expected length) but displays a super short duration. What's going on?
Source: http://pastebin.com/M5MKkEL3
One of the things that looks wrong to me is this:
If you look at the variable "audio_time", set a breakpoint and read it on every encoded frame, you will get this:
..
Encode frame 8: 0.00010416666666666666
Encode frame 9: 0.00012500000000000000
(and so on)
The diff is ~0.00002085, which is 1/1000 of the diff I expected from ~47 frames per second, which is what the encoder wants at a 48k sample rate (48000 / 1024 ≈ 47).
So why do I end up with only a thousandth of the expected duration encoded?
Feel free to point anything suspicious out!
Thanks in advance!
So I found it: in line 75, this was missing:
AVRational time_base;
time_base.num = per_frame_audio_samples;
time_base.den = aud_codec_context->sample_rate;
aud_codec_context->time_base = time_base;
per_frame_audio_samples = 1024 in my case.
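For context, a rough sketch of how this time base relates to the pts bookkeeping (the frame and counter names below are illustrative, not from the pastebin):
/* With time_base = per_frame_audio_samples / sample_rate, one tick of the
 * codec clock corresponds to one whole 1024-sample frame, so the pts can be
 * a plain frame counter: */
aud_frame->pts = frame_index++;
/* audio_time in seconds then comes out as
 *   aud_frame->pts * av_q2d(aud_codec_context->time_base)
 *   = frame_index * per_frame_audio_samples / sample_rate,
 * i.e. roughly 1/47 s per encoded frame at 48 kHz, as expected. */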

arDetectMarker + pixel format + segmentation fault

I am trying to use the arDetectMarker function from arToolKit to detect markers in an image. I read the image from disk in the following way:
cv::Mat image;
cv::Mat temp;
image = cv::imread(path, CV_LOAD_IMAGE_COLOR);
cv::cvtColor(image, temp, CV_RGB2BGR);
and converted it to ARUint8* format using:
dataPtr = (ARUint8 *) ((IplImage) temp).imageData;
I am sure that the data is correctly converted to dataPtr since I saved the image to check. Unfortunately, when I call arDetectMarker, a "segmentation fault" happens and I don't know the reason (I think it is due to the pixel format). I've read in the documentation:
http://artoolkit.sourceforge.net/apidoc/ar_8h.html#b2868d9587c68fb7255d4f270bcf878f
and it says that the format is in general ABGR. But I am using Ubuntu 14.04 and I think I have V4L drivers, although I am not sure, since I am not working with video. I tried converting the loaded image to ABGR or BGRA, but I am not sure if I did it correctly, or if this is really a requirement.
Also, I did the calibration procedure before.
Can anybody help me?
Thanks!
Marcelo.
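If the ARToolKit build expects a 4-byte-per-pixel format (such as the ABGR mentioned in the docs), handing it a 3-channel buffer would make it read past the end of the image, which could explain a segmentation fault. Here is a minimal sketch of producing a tightly packed 4-channel buffer with OpenCV's C API; whether this build wants BGRA or ABGR byte order is an assumption to verify:
/* cvLoadImage returns BGR; pad it to 4 channels so a library that assumes
 * 4 bytes per pixel never reads out of bounds. */
IplImage *bgr  = cvLoadImage(path, CV_LOAD_IMAGE_COLOR);
IplImage *bgra = cvCreateImage(cvGetSize(bgr), IPL_DEPTH_8U, 4);
cvCvtColor(bgr, bgra, CV_BGR2BGRA);
ARUint8 *dataPtr = (ARUint8 *)bgra->imageData;  /* valid while bgra stays allocated */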

both video & audio run as fast as possible - ffmpeg

Using this code example (dranger - ffmpeg):
https://github.com/arashafiei/dranger-ffmpeg-tuto/blob/master/tutorial03.c
and dranger tutorial for ffmpeg:
http://dranger.com/ffmpeg/tutorial02.html
The video runs as fast as possible, but that makes sense because there is no timer and we just extract the frames as soon as they are ready. But for some reason, the sound also runs as fast as possible, even though he says it shouldn't.
I'm using mac os x (Maybe that has something to do with it).
Any suggestions?
Try adding:
aCodecCtx = pFormatCtx->streams[audioStream]->codec;
aCodecCtx->request_sample_fmt = AV_SAMPLE_FMT_S16;  /* ask the decoder for packed signed 16-bit samples */
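Note that request_sample_fmt is only a hint; if the AAC decoder still hands back planar float (its native output in newer FFmpeg builds), the tutorial's SDL callback, which assumes packed S16, will misread the buffers. A hedged sketch of converting with libswresample in that case (the out_buf buffer, the frame variable and the lack of error handling are illustrative, not part of the tutorial):
#include <libswresample/swresample.h>

/* Build a converter from whatever the decoder actually produced to packed
 * S16 at the same rate and channel layout. */
SwrContext *swr = swr_alloc_set_opts(NULL,
        aCodecCtx->channel_layout, AV_SAMPLE_FMT_S16, aCodecCtx->sample_rate,
        aCodecCtx->channel_layout, aCodecCtx->sample_fmt, aCodecCtx->sample_rate,
        0, NULL);
swr_init(swr);

/* Per decoded AVFrame, convert into a packed S16 buffer before copying the
 * bytes into SDL's audio buffer. */
uint8_t out_data[192000];
uint8_t *out_buf = out_data;
int converted = swr_convert(swr, &out_buf, frame->nb_samples,
                            (const uint8_t **)frame->data, frame->nb_samples);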

setting video bit rate through ffmpeg API is ignored for libx264 Codec

I am transcoding a video using the FFmpeg API in C code.
I am trying to set the video bit rate using the ffmpeg API as shown below:
ovCodecCtx->bit_rate = 100 * 1000;
The encoder I am using is libx264.
But this parameter does not take effect and the resulting video quality is very bad.
I have even tried setting related parameters like rc_min_rate, rc_max_rate, etc., but the video quality is still very low, as these parameters do not take effect either.
Could any expert tell me how to set the bit rate correctly using the FFmpeg API?
Thanks
I have found the solution to my problem. In fact, somebody facing the same problem posted the solution on the ffmpeg (libav) user forum, and it works in my case too. I am posting the answer to my own question so that other users facing a similar issue might benefit from this post.
Problem:
Setting the video bit rate programmatically for the H264 video codec was not honoured by libx264. Even though the same setting worked for the MPEG-1, MPEG-2 and MPEG-4 video codecs, it was not recognised for H264, and the resulting video quality was very bad.
Solution:
We need to set the pts for the decoded/resized frames before they are fed to the encoder.
The person who found the solution went through the ffmpeg.c source and was able to figure this out. We need to first rescale the AVFrame's pts from the stream's time_base to the codec time_base to get a simple frame number (e.g. 1, 2, 3):
pic->pts = av_rescale_q(pic->pts, ost->time_base, ovCodecCtx->time_base);
avcodec_encode_video2(ovCodecCtx, &newpkt, pic, &got_packet_ptr);
And when we receive the encoded packet back from the libx264 codec, we need to rescale the pts and dts of the encoded video packet to the stream time base:
newpkt.pts = av_rescale_q(newpkt.pts, ovCodecCtx->time_base, ost->time_base);
newpkt.dts = av_rescale_q(newpkt.dts, ovCodecCtx->time_base, ost->time_base);
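Putting the pieces together, a sketch of the per-frame flow with the same variable names as above (the got_packet check, oFormatCtx and the write call are assumptions about the surrounding code, not part of the original post):
int got_packet_ptr = 0;
av_init_packet(&newpkt);
newpkt.data = NULL;   /* let the encoder allocate the payload */
newpkt.size = 0;

/* stream time base -> codec time base before encoding */
pic->pts = av_rescale_q(pic->pts, ost->time_base, ovCodecCtx->time_base);

if (avcodec_encode_video2(ovCodecCtx, &newpkt, pic, &got_packet_ptr) == 0 && got_packet_ptr) {
    /* codec time base -> stream time base before muxing */
    newpkt.pts = av_rescale_q(newpkt.pts, ovCodecCtx->time_base, ost->time_base);
    newpkt.dts = av_rescale_q(newpkt.dts, ovCodecCtx->time_base, ost->time_base);
    av_interleaved_write_frame(oFormatCtx, &newpkt);
    av_free_packet(&newpkt);
}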
Thanks

dm365 mpeg4 encoder P-Frames

I am encoding video with the TI DM365 MPEG-4 encoder and containerizing it in an ffmpeg MP4 container, using a dummy FMP4 codec to produce the headers and footers. While the container is proven to work correctly with a similar Intel-based MPEG-4 encoder, the DM365 gives a mosaic result if P-frames are used at all. Using only I-frames works, but I would like to minimize the amount of data stored.
An example of the result can be viewed here. Settings are 1 I-frame, 9 P-frames.
TI developers didn't answer my question regarding this in 2 days, so I am trying to get help here.
This may help: a TI data sheet on the various settings/parameters and their effect. Apologies if it is telling you stuff you already know...
TI Data Sheet spraba9.pdf
