How to convert MP4 (h264/aac) file to F4F fragments for HDS (Adobe) - c

I am looking for some input on how to programmatically convert mp4 files to fragmented f4f files with accompanying manifests.
I currently have an implementation for creating segmented MPEG2-TS files with an accompanying manifest for Apple's HLS, and want to create a similar piece of software for Adobe's HDS.
My code is based on Libav (alternatively, ffmpeg), so I was hoping they had native support for muxing f4f files, but I have not been able to find any resources for it.
What I am specifically looking for:
Whether (and how) the format is supported in libav?
Whether there are any special requirements (such as the h264_mp4toannexb bitstream filter required for converting MP4 to MPEG2-TS)?
Any sample code (even if it's not using libav/ffmpeg)
An easy-to-read manifest specification.

I'm afraid you have to read the MP4/F4F specifications and implement it yourself.
MP4 file format: ISO/IEC 14496-14
f4f file format: it is included in the f4v specification (http://www.adobe.com/cn/devnet/f4v.html).
The code of mod_h264_streaming (http://h264.code-shop.com/trac) may be helpful.
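One quick thing worth doing first is to probe what your own libav/FFmpeg build actually registers as output formats. The following is only a minimal sketch, assuming a reasonably recent libavformat; the candidate muxer names ("hds", "f4v", "flv") are guesses on my part and should be checked against the output of ffmpeg -muxers for your build.

/* Sketch: probe libavformat for muxers that might be usable for HDS output.
 * The short names tried here are assumptions; check `ffmpeg -muxers`. */
#include <stdio.h>
#include <libavformat/avformat.h>

int main(void)
{
    const char *candidates[] = { "hds", "f4v", "flv", NULL };

#if LIBAVFORMAT_VERSION_MAJOR < 58
    av_register_all();   /* only needed on older libavformat versions */
#endif

    for (int i = 0; candidates[i]; i++) {
        const AVOutputFormat *fmt = av_guess_format(candidates[i], NULL, NULL);
        printf("%-4s: %s\n", candidates[i],
               fmt ? fmt->long_name : "not available in this build");
    }
    return 0;
}

If none of these are present, the manual route via the specifications below is the only option.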

Related

Which is the "correct" content-type for FLAC?

Some software uses audio/flac. Some uses audio/x-flac.
MDN suggests that x-flac is "non-standard". But based on what?
But this appears to be the official registry for audio/ types... and audio/flac doesn't appear on it. Has nobody ever registered flac there? Whyever not?
In 2021, what is the correct place to determine the list of "standard" content-types, and what is it for FLAC?
In 2022 Mozilla seems to have the answer:
The Free Lossless Audio Codec (FLAC) is a lossless audio codec; there
is also an associated simple container format, also called FLAC, that
can contain this audio. The format is not encumbered by any patents,
so its use is safe from interference. FLAC files can only contain FLAC
audio data.
audio/flac
audio/x-flac (non-standard)
So, based on that, I go with audio/flac. It seems to work for me on a Grav website.

ffmpeg - missing avformat headers

I followed the steps here to compile FFmpeg, and there was no problem; it is working well. But I did not understand something.
There are two folders under my home directory.
--ffmpeg_sources
--ffmpeg_build
Inside ffmpeg_sources/libavformat I have a number of headers:
aiff.h
apetag.h
argo_asf.h
asfcrypt.h
asf.h
ast.h
av1.h
avc.h
avformat.h
avi.h
avio.h
avio_internal.h
avlanguage.h
ogg.h
...
but ffmpeg_build/avformat has 3 headers:
avformat.h
avio.h
version.h
By the way, this is my /usr/include/x86_64-linux-gnu/libavformat:
avformat.h
avio.h
version.h
Why aren't all the headers present in these other two directories?
For example, I want to use ogg_read_packet, but when I try to include <libavformat/oggdec.h> I get a cannot open source file "libavformat/oggdec.h" C/C++(1696) error.
Building and using the library aren't the same things.
Have a look at libavformat/oggdec.h and libavformat/oggdec.c. You should realize that there is no way to use the ogg_read_packet function directly:
there is no declaration for it in the header file;
the function is declared static in the source file.
If you want to encode/decode with a specific codec (here, the codec carried in the Ogg container), you have to find an encoder (avcodec_find_encoder or avcodec_find_encoder_by_name) or a decoder (avcodec_find_decoder or avcodec_find_decoder_by_name) and attach it to an AVCodecContext via avcodec_open2.
Then for encoding use the 'encode' functions described here and for decoding the 'decode' functions described here.
For more info:
FFmpeg Documentation
FFmpeg Examples
In short, use the public interface. Only 'God' knows the internals of FFmpeg.
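To make the "use the public interface" point concrete, here is a minimal sketch (error handling abbreviated, "input.ogg" is a placeholder path) of the supported way to read packets from an Ogg file and open a decoder for its audio stream; av_read_frame is the public counterpart of what the internal ogg_read_packet does inside the demuxer.

/* Sketch: demux an Ogg file and open a decoder using only public API calls. */
#include <stdio.h>
#include <inttypes.h>
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>

int main(void)
{
    AVFormatContext *ic = NULL;
    AVPacket *pkt = av_packet_alloc();

    if (avformat_open_input(&ic, "input.ogg", NULL, NULL) < 0)
        return 1;
    if (avformat_find_stream_info(ic, NULL) < 0)
        return 1;

    /* Pick the best audio stream and open a decoder for it. */
    int stream = av_find_best_stream(ic, AVMEDIA_TYPE_AUDIO, -1, -1, NULL, 0);
    if (stream < 0)
        return 1;
    const AVCodec *dec = avcodec_find_decoder(ic->streams[stream]->codecpar->codec_id);
    AVCodecContext *dc = avcodec_alloc_context3(dec);
    avcodec_parameters_to_context(dc, ic->streams[stream]->codecpar);
    if (avcodec_open2(dc, dec, NULL) < 0)
        return 1;

    /* av_read_frame() is the public equivalent of the demuxer's read_packet. */
    while (av_read_frame(ic, pkt) >= 0) {
        if (pkt->stream_index == stream)
            printf("packet: %d bytes, pts=%" PRId64 "\n", pkt->size, pkt->pts);
        av_packet_unref(pkt);
    }

    avcodec_free_context(&dc);
    avformat_close_input(&ic);
    av_packet_free(&pkt);
    return 0;
}

From there, avcodec_send_packet/avcodec_receive_frame (the 'decode' functions mentioned above) turn the packets into decoded frames.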

How Do I Get the Suggested File Extension for a Document Format?

The OWL API supports many different output document formats. I would like to give the user a choice of which format to use, but each format should have a different file extension, such as .ttl for Turtle and .rdf for RDF. Does the API provide a way to get a suggested file extension for a given format?
If there isn't a way, I wish there were!
There is currently nothing in the API to do this. I have opened an issue for it:
https://github.com/owlcs/owlapi/issues/346
Edit: There's now an Extensions enum that links (some of) the format classes to (some of) the most common file extensions.
Usage: `Iterable formats = Extensions.getCommonExtensions(RDFXMLDocumentFormat.class);`
This is available in the master, version4 and version5 branches, and will be in the next releases of the OWL API.

Programmatically capture X11 region with ffmpeg in C/whatever

There is an input format option for ffmpeg -- x11grab -- which allows one to capture a specified region and output it to a file/stream. I'm trying to do the same thing programmatically, but I haven't found any non-basic tutorials/reference for the ffmpeg API.
I wonder how it is possible to open an X11 region with avformat_input_file or something like this. Or should I do it with XCopyArea/etc.?
(Any programming language will do)
There are many applications that take a screenshot. Major hint: it's open source, use the source. If you can't find the code in ffmpeg, any example application will do:
http://git.gnome.org/browse/gnome-screenshot/tree/src/screenshot-utils.c#n425
This is gnome-screenshot source code. This example uses gdk_get_default_root_window().
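If you would rather stay inside the ffmpeg API than go through Xlib, a hedged sketch of opening an X11 region via libavformat's x11grab input device looks roughly like this (it requires libavdevice; the display string, offsets, size and framerate are example values):

/* Sketch: open an X11 region through the x11grab device, roughly the
 * programmatic equivalent of
 * `ffmpeg -f x11grab -video_size 640x480 -framerate 25 -i :0.0+100,200`. */
#include <libavdevice/avdevice.h>
#include <libavformat/avformat.h>
#include <libavutil/dict.h>

int main(void)
{
    avdevice_register_all();                         /* registers x11grab */

    const AVInputFormat *x11 = av_find_input_format("x11grab");
    if (!x11)
        return 1;                                    /* build lacks x11grab */

    AVDictionary *opts = NULL;
    av_dict_set(&opts, "video_size", "640x480", 0);  /* region size  */
    av_dict_set(&opts, "framerate",  "25",      0);  /* capture rate */

    /* ":0.0+100,200" = display :0.0, region offset x=100, y=200 */
    AVFormatContext *ic = NULL;
    if (avformat_open_input(&ic, ":0.0+100,200", x11, &opts) < 0)
        return 1;

    /* From here on, av_read_frame(ic, pkt) delivers the captured video as
     * packets that can be fed to an encoder/muxer like any other input. */
    avformat_close_input(&ic);
    av_dict_free(&opts);
    return 0;
}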

Powerflex Database File extensions

I am trying to understand the different file extensions for the pfxplus powerflex database. Could someone please briefly tell me what each file is for?
.k1
.k2
.k3
...
.k13
.k14
.k15
.fd
.def
.hdr
.prc
.pc3
Data files:
OK, so .dat is the data file.
.k1 -> .k15 are index files.
These are the critical data files for runtime (combined with filelist.cfg, pffiles.tab or similar to define what files are available overall).
.fd is the file definition, needed for compiling programs.
.tag (which you did not mention) is needed only if you need to access field names at run time (such as using a generic report tool)
.def is the file definition in human-readable form; it is not needed by any process but is produced so a programmer or user can understand the file structure.
Run time:
The .ptc files are the compiled threads interpreted by the powerflex runtime.
The .prc file is a resource file that is used at runtime in conjunction with the .ptc file - it defines how a character-based program is to look in a GUI environment in "g-mode". It was the cheap way to upgrade character-based programs when Windows first started getting popular usage.
.hdr and .pc3 escape me at the moment, but are vaguely familiar - .hdr is probably another data file used with compression or special field types for later versions of pfxplus. .pc3 may in fact be the .ptc files...
