Where to find the specification of the jvet/HM and/or the x265 H.265/HEVC encoders? - video-processing

I'm interested in implementations of H.265/HEVC encoders that can be used in practice. Since the H.265/HEVC standard only defines the decoder, finding good syntax elements is the job of the encoder. However, I guess finding the optimal syntax elements is infeasible, so I assume it is up to the designer of the encoder to develop heuristics that determine good-enough syntax elements within a reasonable time.
To get an idea of what such heuristics could look like, I had a look at two open-source encoders:
jvet/HM: https://vcgit.hhi.fraunhofer.de/jvet/HM
x265 HEVC Encoder: https://bitbucket.org/multicoreware/x265_git/src/master/
However, I cannot find any document that specifies either encoder. Just by looking at the source code, it is very hard for me to understand how the heuristics work.
Does anyone by chance know where I can find such specifications? And/or does anyone know of a reasonable implementation of an HEVC encoder with a proper specification?
Thanks a lot!

Related

How to filter specific frequencies from an Audio file in C

After searching on various search engines, and also here, there is very little information applicable to my situation.
Basically I want to make a program in C that does the following:
Open an audio file (FLAC, MP3, and WAV, to represent a bit of variety)
Filter and cut out a specific band of frequencies (for example 4000-5200 Hz; the frequencies should be entered by the user)
Save the new file (without the filtered frequencies) in the same format as the input file.
Things that would be of interest to me:
Open-Source examples of software that does the same or a similar thing, preferably in C
ANY literature on audio programming in C
Explanations on how the different formats are structured, any sources appreciated
P.S.: I apologise if some parts of the question can be easily googled, but I tried, and there wasn't anything that described this well in detail.
Thanks a lot!!
Answers:
FFmpeg does a lot of audio slicing and dicing, and it's written in pure C. It's pretty big, though, and might be difficult to digest in one go.
"Audio programming" is a bit vague. But from the rest of your question, it sounds like you want to open an audio file from disk, apply some transformations to the audio, and write the data to a new file. (Other areas under the "audio programming" umbrella would include accessing platform-specific APIs to read from a microphone and write audio to an output device).
Broad topic again, but we'll start simple.
I suggest getting (or generating) a .WAV file to start with. WAV files are probably the simplest audio files to read and write manually. Here is a page that describes what you need to know about the WAV format.
Pulse code modulation (PCM) is the simplest audio format to work with since you don't need to worry about decompressing it first. Here is a page (that I wrote) describing different PCM formats.
As for filtering and cutting different frequencies, I think what you're looking for would be low-pass, high-pass, or band-pass filters.
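As a rough sketch of how the pieces fit together (assuming a canonical 44-byte header and 16-bit mono PCM at 44.1 kHz - real WAV files need proper chunk parsing, and MP3/FLAC need decoding first), here is a band-stop (notch) biquad, with coefficients from the well-known Audio EQ Cookbook, applied sample by sample:

```c
#include <math.h>
#include <stdint.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

int main(void)
{
    FILE *in  = fopen("in.wav",  "rb");   /* assumed: 44-byte header, 16-bit mono PCM */
    FILE *out = fopen("out.wav", "wb");
    if (!in || !out) return 1;

    uint8_t header[44];
    if (fread(header, 1, 44, in) != 44) return 1;
    fwrite(header, 1, 44, out);           /* copy the header unchanged */

    const double fs = 44100.0;            /* assumed sample rate */
    const double f0 = 4600.0;             /* centre of the 4000-5200 Hz band */
    const double Q  = f0 / 1200.0;        /* Q chosen so the bandwidth is ~1200 Hz */
    double w0 = 2.0 * M_PI * f0 / fs, alpha = sin(w0) / (2.0 * Q);
    double b0 = 1.0, b1 = -2.0 * cos(w0), b2 = 1.0;       /* notch numerator */
    double a0 = 1.0 + alpha, a1 = b1, a2 = 1.0 - alpha;   /* notch denominator */

    double xm1 = 0, xm2 = 0, ym1 = 0, ym2 = 0;            /* filter state */
    int16_t s;
    while (fread(&s, sizeof s, 1, in) == 1) {
        double x = s;
        double y = (b0 * x + b1 * xm1 + b2 * xm2 - a1 * ym1 - a2 * ym2) / a0;
        xm2 = xm1; xm1 = x;
        ym2 = ym1; ym1 = y;
        if (y > 32767.0)  y = 32767.0;    /* clamp to the 16-bit sample range */
        if (y < -32768.0) y = -32768.0;
        s = (int16_t)y;
        fwrite(&s, sizeof s, 1, out);
    }
    fclose(in);
    fclose(out);
    return 0;
}
```

A single biquad gives a fairly gentle notch; for a steeper cut you would cascade several sections or design a proper band-stop filter, but the overall structure stays the same.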
I hope that helps you get started. Ask more questions here on Stack Overflow as needed.

c99 dynamic array

I'm writing a very small, project-specific OpenGL ES engine for iPhone and I really need to use a good, solid, and proven dynamic array library/macro in the C99 dialect. (No C++, Obj-C, STL whatsoever.)
It's strongly necessary for render batches and polygon meshes, so it should be able to handle various types of data, and additionally cause minimal overhead when the array size changes and new data is inserted.
I've been searching around and found two candidates for my need.
The first one is ccCArray from Cocos2d.
The other is utarray, written by Troy D. Hanson.
ccCArray IS rock solid, thoroughly proven by the community. utarray looks fine, but I cannot find anyone who actually uses it.
Any more suggestion?
A library?! A C++ template would be more than suitable for this need. I'd say AT MOST about 15 functions (excluding alternative constructors and const getters), and you're done. It would also work for ANY type, ANY size and ANY size type (byte, int etc.). And it's just one file: a .h or, better said, a .hpp.
Any reason you're rejecting it? Seems like you want to make life harder for yourself :)
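For what it's worth, utarray is a single header and its use is a handful of macros. A minimal sketch of what it looks like in C99 (based on my reading of the header; check the current documentation for the details):

```c
#include <stdio.h>
#include "utarray.h"   /* single-header dynamic array by Troy D. Hanson */

int main(void)
{
    UT_array *nums;
    utarray_new(nums, &ut_int_icd);      /* ut_int_icd: built-in descriptor for int */

    for (int i = 0; i < 10; i++)
        utarray_push_back(nums, &i);     /* the array grows as needed */

    /* iterate over all elements */
    for (int *p = (int *)utarray_front(nums); p != NULL;
         p = (int *)utarray_next(nums, p))
        printf("%d\n", *p);

    printf("len = %u\n", utarray_len(nums));
    utarray_free(nums);
    return 0;
}
```

For struct elements you supply your own UT_icd with the element size and optional init/copy/dtor hooks, which is what you would need for render batches of vertices.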

TrueType Font Parsing in C

I want to read a TTF and draw text with that font into a buffer. Even though there are libraries such as FreeType and gd to do this task, I want to write my own code. Can you advise me on how to accomplish this task?
Unless you're one of the world's top experts on fonts, typography, and writing systems, the answer is simple: DON'T. TrueType/OpenType has a lot of tables you need to support for correct rendering, and even when using FreeType (which is an extremely low-level library), most people get it wrong.
If you need to do low-level, deterministic-across-platforms font handling, then at the very least, you should be using FreeType and libotf. This will provide you with access to the glyphs and outlines which you can then render however you like. In most cases though using your GUI system's text rendering routines will be a lot easier and less error-prone.
Finally, if you insist on ignoring my advice, a good RTFS on FreeType and Microsoft's online resources explaining the tables in TrueType/OpenType fonts are probably the best place to get started.
I would suggest you
Read all the TTF docs you can find
Find all the open source TTF parsers + renderers you can find, in many different languages, such as FreeType (C/C++), Batik (Java), and anything else you can google for. Also, George Williams' FontForge will likely be very helpful to you on your journey.
Rip apart all the programs you collected in step 2 and see how they work. See if you can make a tiny example program that does something simple, like dumping the list of points for the outline of the letter "I".
Work on your rasterization. Start with something very simple, like rasterizing the letter "l".
The problem with TTF is that it is not a simple file format, and FreeType handles a lot of crazy details for you. However, if you don't care about portability, and you already have a specific TTF file you want to render, and you only care about a small, simple alphabet, like Latin or Cyrillic, you might be OK.
Also you might want to check out a list of TTF documentation I linked to from my little project https://github.com/donbright/font_to_svg/
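As a concrete starting point for step 3 above, a minimal sketch that just dumps the table directory is a good first exercise. The layout below (a 12-byte offset table followed by 16-byte table records, all fields big-endian) follows the sfnt header described in the Apple/Microsoft TrueType docs:

```c
#include <stdint.h>
#include <stdio.h>

/* Read big-endian 16- and 32-bit values (TrueType files are big-endian). */
static uint16_t rd16(FILE *f) { int a = fgetc(f), b = fgetc(f); return (uint16_t)((a << 8) | b); }
static uint32_t rd32(FILE *f) { uint32_t hi = rd16(f); return (hi << 16) | rd16(f); }

int main(int argc, char **argv)
{
    if (argc < 2) { fprintf(stderr, "usage: %s font.ttf\n", argv[0]); return 1; }
    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror(argv[1]); return 1; }

    uint32_t version   = rd32(f);  /* 0x00010000 for TrueType outlines, 'OTTO' for CFF */
    uint16_t numTables = rd16(f);
    rd16(f); rd16(f); rd16(f);     /* searchRange, entrySelector, rangeShift: skip */

    printf("sfnt version 0x%08x, %u tables\n", (unsigned)version, numTables);
    for (uint16_t i = 0; i < numTables; i++) {
        char tag[5] = {0};
        fread(tag, 1, 4, f);       /* 4-character table tag, e.g. "cmap", "glyf", "head" */
        uint32_t checksum = rd32(f), offset = rd32(f), length = rd32(f);
        (void)checksum;
        printf("%-4s  offset %8u  length %8u\n", tag, (unsigned)offset, (unsigned)length);
    }
    fclose(f);
    return 0;
}
```

From there, the "cmap" table maps characters to glyph indices and "glyf"/"loca" hold the outlines, which is where the real work (and the exercise of dumping the points of "I") begins.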
Not impossible, for anyone else tempted to try. I was curious about doing it because I like the DIY graphics approach where I allocate some memory and write into it, then save as a jpg or png. I pirated a bitmap font from giflib but that's strictly 8x8 pixels.
A few links:
`http://stevehanov.ca/blog/index.php?id=143`
`https://www.google.com/search?q=ttf+parser+c&ie=utf-8&oe=utf-8`
As R.. wrote at the same time as my comment, I would not suggest building another TTF parser on your own. If you are eager to learn this very exciting field of computer science, I would recommend "The Art of Computer Programming" Vol. 2 by Donald E. Knuth. (It is Metafont, not TTF, but proven to be correct :-)

Reverse Engineering File Formats using AI Techniques

This is to extend the question: Tools to help reverse engineer binary file formats
Are there any publicly available tools that use clustering and/or data mining techniques to reverse engineer file formats?
For example, with the tool you would have a collection of files that have the same format and the output of the tool would be the generic structure?
If one had a truly efficient binary encoding format (ZIP files are an example), then the information content of each bit is high. Essentially, it will look like perfectly random data.
You can't infer anything from that without additional knowledge.
If the binary encoding isn't efficient, in theory you have some faint chance of seeing structure. But this still sounds really hard: how do you even begin guessing where the boundaries of fields are?
The AI/machine-learning types will tell you that you can't learn anything unless you already "almost" know it. Often they succeed by encoding the problem with tokens that you can at least reason about.
I don't think you can do this without providing more information. Do you know anything about the file formats? Are field sizes always less than N bits? Are only ASCII strings encoded, or the other way round?
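One crude heuristic for the "where are the field boundaries" problem mentioned above (not a published tool, just an illustration): given many files that share a format, compute the entropy of the byte values at each offset. Near-zero entropy suggests a fixed field or magic number; entropy close to 8 bits suggests compressed or highly variable data.

```c
#include <math.h>
#include <stdio.h>

#define MAX_OFFSET 64   /* only examine the first 64 bytes of each file */

int main(int argc, char **argv)
{
    static unsigned counts[MAX_OFFSET][256]; /* counts[offset][byte value] */
    static unsigned total[MAX_OFFSET];       /* how many files reached this offset */

    for (int i = 1; i < argc; i++) {
        FILE *f = fopen(argv[i], "rb");
        if (!f) continue;
        unsigned char buf[MAX_OFFSET];
        size_t n = fread(buf, 1, sizeof buf, f);
        for (size_t off = 0; off < n; off++) {
            counts[off][buf[off]]++;
            total[off]++;
        }
        fclose(f);
    }

    for (size_t off = 0; off < MAX_OFFSET && total[off] > 0; off++) {
        double h = 0.0;
        for (int v = 0; v < 256; v++) {
            if (counts[off][v] == 0) continue;
            double p = (double)counts[off][v] / total[off];
            h -= p * log2(p);   /* Shannon entropy in bits */
        }
        printf("offset %3zu: entropy %.2f bits over %u files\n", off, h, total[off]);
    }
    return 0;
}
```

Run it over a directory of samples and look for runs of low-entropy offsets: those are candidates for fixed header fields, and the jumps between low- and high-entropy regions hint at field boundaries.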

Why should I use a human readable file format?

Why should I use a human readable file format in preference to a binary one? Is there ever a situation when this isn't the case?
EDIT:
I did have this as an explanation when initially posting the question, but it's not so relevant now:
When answering this question I wanted to refer the asker to a standard SO answer on why using a human readable file format is a good idea. Then I searched for one and couldn't find one. So here's the question
It depends
The right answer is: it depends. If you are writing audio/video data, for instance, and you crowbar it into a human readable format, it won't be very readable! Word documents are the classic example of a format people have wished were human readable, and so more flexible, and by moving to XML, MS is going that way.
Much more important than binary or text is a standard or not a standard. If you use a standard format, then chances are you and the next guy won't have to write a parser, and that's a win for everyone.
Following this are some opinionated reasons why you might want to choose one over the other, if you have to write your own format (and parser).
Why use human readable?
The next guy. Consider the maintaining developer looking at your code 30 years or six months from now. Yes, he should have the source code. Yes, he should have the documents and the comments. But he quite likely won't. And having been that guy, and having had to rescue or convert old, extremely valuable data, I'll thank you for making it something I can just look at and understand.
Let me read AND WRITE it with my own tools. If I'm an Emacs user I can use that. Or Vim, or Notepad, or ... Even if you've created great tools or libraries, they might not run on my platform, or even run at all any more. Also, I can then create new data with my tools.
The tax isn't that big - storage is free. Nearly always disc space is free. And if it isn't you'll know. Don't worry about a few angle brackets or commas, usually it won't make that much difference. Premature optimisation is the root of all evil. And if you are really worried just use a standard compression tool, and then you have a small human readable format - anyone can run unzip.
The tax isn't that big - computers are quick. It might be faster to parse binary. Until you need to add an extra column, or data type, or support both legacy and new files. (Though this is mitigated with Protocol Buffers.)
There are a lot of good formats out there. Even if you don't like XML. Try CSV. Or JSON. Or .properties. Or even XML. Lots of tools exist for parsing these already in lots of languages. And it only takes 5mins to write them again if mysteriously all the source code gets lost.
Diffs become easy. When you check in to version control it is much easier to see what has changed. And view it on the Web. Or your iPhone. Binary, you know something has changed, but you rely on the comments to tell you what.
Merges become easy. You still get questions on the web asking how to append one PDF to another. This doesn't happen with Text.
Easier to repair if corrupted. Try and repair a corrupt text document vs. a corrupt zip archive. Enough said.
Every language (and platform) can read or write it. Of course, binary is the native language for computers, so every language will support binary too. But a lot of the classic little tool scripting languages work a lot better with text data. I can't think of a language that works well with binary and not with text (assembler maybe) but not the other way round. And that means your programs can interact with other programs you haven't even thought of, or that were written 30 years before yours. There are reasons Unix was successful.
Why not, and use binary instead?
You might have a lot of data - terabytes maybe. And then a factor of 2 could really matter. But premature optimization is still the root of all evil. How about use a human one now, and convert later? It won't take much time.
Storage might be free but bandwidth isn't (Jon Skeet in comments). If you are throwing files around the network then size can really make a difference. Even bandwidth to and from disc can be a limiting factor.
Really performance intensive code. Binary can be seriously optimised. There is a reason databases don't normally have their own plain text format.
A binary format might be the standard. So use PNG, MP3 or MPEG. It makes the next guys job easier (for at least the next 10 years).
There are lots of good binary formats out there. Some are global standards for that type of data. Or might be a standard for hardware devices. Some are standard serialization frameworks. A great example is Google Protocol Buffers. Another example: Bencode
Easier to embed binary. Some data already is binary and you need to embed it. This works naturally in binary file formats, but looks ugly and is very inefficient in human readable ones, and usually stops them being human readable.
Deliberate obscurity. Sometimes you don't want it obvious what your data is doing. Encryption is better than accidental security through obscurity, but if you are encrypting you might as well make it binary and be done with it.
Debatable
Easier to parse. People have claimed that both text and binary are easier to parse. Now clearly the easiest to parse is whatever your language or library supports parsing, and this is true for some binary and some human readable formats, so it doesn't really support either side. Binary formats can clearly be chosen so they are easy to parse, but so can human readable ones (think CSV or fixed width), so I think this point is moot. Some binary formats can just be dumped into memory and used as is, so this could be said to be the easiest to parse, especially if numbers (not just strings) are involved. However, I think most people would argue that human readable parsing is easier to debug, as it is easier to see what is going on in the debugger (slightly).
Easier to control. Yes, it is more likely someone will mangle text data in their editor, or will moan when one Unicode format works and another doesn't. With binary data that is less likely. However, people and hardware can still mangle binary data. And you can (and should) specify a text encoding for human-readable data, either flexible or fixed.
At the end of the day, I don't think either can really claim an advantage here.
Anything else
Are you sure you really want a file? Have you considered a database? :-)
Credits
A lot of this answer is merging together stuff other people wrote in other answers (you can see them there). And especially big thanks to Jon Skeet for his comments (both here and offline) for suggesting ways it could be improved.
It entirely depends on the situation.
Benefits of a human readable format:
You can read it in its "native" format
You can write it yourself, e.g. for unit tests - or even for real content, depending on what it's for
Probable benefits of a binary format:
Easier to parse (in terms of code)
Faster to parse
More efficient in terms of space
Easier to control (any time you need text in there, you can ensure it's UTF-8 encoded, and length prefixed etc)
Easier to include opaque binary data efficiently (images, etc - with a text format you'd be getting into base64)
Don't forget that you can always implement a binary format but produce tools to convert to/from a human-readable format as well. That's what the Protocol Buffers framework does - it's actually pretty rare IME to need to parse a text version of a protocol buffer, but it's really handy to be able to write it out as text.
EDIT: Just in case this ends up being an accepted answer, you should also bear in mind the point made by starblue: Human readable forms are much better for diffing. I suspect it would be feasible to design a binary format which is appropriate for diffing (and where a human-readable diff could be generated) but out-of-the-box support from existing diff tools will be better for text.
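As a toy illustration of the trade-off (and of the "length prefixed" point above), here is the same hypothetical record written once as a line of text and once in a simple binary layout; the format itself is made up for the example:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

typedef struct {
    uint32_t id;
    double   price;
    char     name[32];
} item_t;

/* Text: one human readable line, comma-separated. Easy to read, diff and edit. */
static void write_text(FILE *f, const item_t *it)
{
    fprintf(f, "%u,%.2f,%s\n", (unsigned)it->id, it->price, it->name);
}

/* Binary: fixed-width fields plus a length-prefixed string. Smaller and trivially
 * read back with fread, but opaque in a text editor (and endianness-dependent). */
static void write_binary(FILE *f, const item_t *it)
{
    uint32_t len = (uint32_t)strlen(it->name);
    fwrite(&it->id,    sizeof it->id,    1, f);
    fwrite(&it->price, sizeof it->price, 1, f);
    fwrite(&len,       sizeof len,       1, f);   /* length prefix */
    fwrite(it->name,   1,              len, f);
}

int main(void)
{
    item_t it = { 42, 19.99, "blue widget" };
    write_text(stdout, &it);
    FILE *bin = fopen("item.bin", "wb");
    if (bin) { write_binary(bin, &it); fclose(bin); }
    return 0;
}
```

The binary version is smaller and parses with a couple of fread calls, but a diff tool or a text editor tells you nothing useful about it; the text version is exactly the other way round.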
Version control is easier with text formats, because changes can easily be viewed and merged.
MS Word especially is giving us grief in this respect.
Open format -- no binary bit juggling
Readability :)
Interchange across platforms
Debugging aid
Easily parsed (and easily converted to any format)
One important point: you write a parser once, but read the output many times. That kind of tilts the balance in favor of HRF.
A major reason is that if someone needs to read the data say, 30 years from now, human readable format can be figured out. Binary is much more difficult.
If you have large data sets that are binary by nature (e.g. images), they obviously can't be stored in any form other than binary. But even then, the metadata could (and should!) be human-readable.
There's something called The Art of Unix Programming.
I won't say it's good or bad, but it's fairly famous. It has a whole chapter called Textuality in which the author asserts that human readable file format are an important part of the Unix way of programming.
They open the possibility to be created/edited with tools other than the original ones. New and better tools can be developed by others, integration into third party applications becomes possible. Think about binary iCal files, for example - would the format have been a success?
Apart from that: human readable files improve the ability to debug or, for the savvy user, at least to find the reason for an error.
Pros for binary:
fast to parse
generally smaller data
easy to write a parser for
Pros for human readable:
easier to understand while reading - no "field X is set to 4 487 which means that the reactor should be shut down NOW"
if using something like XML, it is easy to write a tool that will parse any file
I have had to deal with both types. If you are sending data and you want to keep it small binary is good. If you expect people to read it then human readable is good.
Human readable is generally somewhat self-documenting as well. And with binary it is very easy to make mistakes - and hard to spot them.
Editable
Readable (duh!)
Printable
Notepad and vi enabled
Most importantly, their function can be deduced from the content (well, mostly)
Because you are a human, and sooner or later you (or one of your customers) will be able to read the data.
We only use binary format if speed is an issue. And even then debugging is troublesome so we added a human readable equivalent.
Interoperability is the standard argument, i.e. a human-readable form is easier for developers of disparate systems to deal with so therefore confers some advantage.
Personally I think that is not true, and the performance benefits of binary files ought to beat that argument, especially if you publish your protocol. However, the ubiquity of XML/HTTP-based frameworks for machine interactions means that it is easier to adopt.
XML is way over-used.
Just a quick illustration where human-readable document format can be a better choice:
documents used for deploying application in production
We used to have our release notes in Word format, but that release notes document had to be opened on various environments (Linux, Solaris) on pre-production and production platforms.
It also had to be parsed in order to extract various data.
In the end, we switched to a wiki-based syntax, still displayed nicely in HTML through a wiki, but still usable as a simple text file in other situations.
As an adjunct to this, there are differing levels of human readability, and all are enhanced by using a good editor or viewer with code coloring, folding or navigation.
For example,
JSON is quite readable even in plaintext
XML has the angle bracket tax but is usable when using a good editor
INI is mostly human readable
CSV can be readable, but is best when loaded into a spreadsheet.
No one has said it, so I will: human-readability is not really a property of a file format (all files are binary after all), but rather of a file format and viewer app combination.
So-called human readable formats are all built on top of an additional abstraction layer: one of the existing text encodings. And viewer programs (often also serving as editors) that are capable of rendering these encodings in a form readable by humans are very common.
Text encoding standards are widespread and fairly mature, which means they're unlikely to evolve much in the foreseeable future.
Usually, on top of the text encoding layer of the format we find a syntax layer that is reasonably intuitive given the target users' knowledge and cultural background.
Hence the benefits of "human-readable" formats:
Ubiquity of suitable viewers and editors.
Timelessness (given that cultural conventions won't change much).
Ease of learning, reading and modifying.
Reliance on the extra abstraction layer makes text-encoded files:
Space hungry.
Slower to process.
"Binary" files do not resort to text encoding abstraction layer as a base (or a common denominator), but they might or might not use some sort of an extra abstraction more suitable for their purpose and hence, they can be much better optimised for a specific task at hand meaning:
Faster processing.
Smaller footprint.
On the other hand:
Viewers and editors are specific to a particular binary format, which makes interoperability harder.
Viewers for any given format are less widespread, because they are more specialised.
Formats might evolve significantly or go out of use over time: their main benefit is being very well suited to a particular task, and as the task or its requirements evolve, so does the format.
Take a moment and think about applications OTHER than web development.
The following assumptions are false:
A) The data has a meaning that is "obvious" in text format.
Things like control systems for a steel mill, or manufacturing plant don't typically have any advantage in being human readable. The software for those types of environments will typically have routines to display data in a graphically meaningful manner.
B) Outputting it in text is easier. Unnecessary conversions actually require more code and make a system LESS robust. The fact of the matter is that if you are NOT using a language which treats all variables as strings, then human readable text is an extra conversion. I.e. extra code means more code to be verified and tested, and more opportunities to introduce errors into the application.
C) You have to parse it anyway. In many cases, for the DSP systems I've worked on (i.e. NO human readable interface to start with), data is streamed out of the system in uniformly sized packets. Logging the data for analysis and later processing is simply a matter of pointing to the beginning of a buffer and writing a multiple of the block size to the data logger system. This allows me to analyse the data "untouched", exactly as the customer's system would see it, where, once again, converting it to a different format could introduce errors. Not only that, if you only save the "converted data" you may lose information in the translation that may help you diagnose a problem.
D) Text is a natural format for the data. No hardware I've ever seen uses a "TEXT" interface. (My first job out of college was writing a device driver for a line scan camera.) The system built on top of it MIGHT, but not every system is a "PC".
For web pages, where the information has a "natural" meaning in text format, sure, knock yourself out. For processing source code it's a no-brainer, of course. But in pervasive computing environments, where even your refrigerator and TOOTHBRUSH are going to have a processor built in, not so much. Simply burdening these types of systems with the overhead of adding the ability to process text introduces unnecessary complexity. You're not going to link "printf" into the software for an 8-bit micro that controls a mouse. (And yeah, somebody has to write that software too.)
The world is not a black and white place where the only forms of computing that need to be considered are PCs and web servers.
Even on a PC, if I can load the data directly into a data structure using a single OS read call and be done with it, without writing serialization and deserialization routines, that's fantastic. Check the block's CRC - job done, on to the next problem.
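To illustrate that last point, a minimal sketch of the "single read call" approach with a made-up fixed-size record; the layout, magic value, and CRC step are hypothetical, and real code would also worry about endianness and struct padding:

```c
#include <stdint.h>
#include <stdio.h>

#pragma pack(push, 1)                 /* keep the on-disk layout free of padding */
typedef struct {
    uint32_t magic;                   /* fixed value identifying the record type */
    uint32_t timestamp;               /* seconds since some agreed epoch */
    int16_t  samples[64];             /* fixed-size payload */
    uint32_t crc;                     /* checksum over everything before this field */
} record_t;
#pragma pack(pop)

int read_record(FILE *f, record_t *rec)
{
    if (fread(rec, sizeof *rec, 1, f) != 1)
        return -1;                    /* short read or EOF */
    if (rec->magic != 0x52454331u)    /* "REC1": an assumed magic number */
        return -2;                    /* not the record we expected */
    /* a real system would verify rec->crc here before trusting the payload */
    return 0;
}

int main(void)
{
    FILE *f = fopen("capture.bin", "rb");
    if (!f) return 1;
    record_t rec;
    while (read_record(f, &rec) == 0)
        printf("record at t=%u, first sample %d\n",
               (unsigned)rec.timestamp, rec.samples[0]);
    fclose(f);
    return 0;
}
```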
Uhm… because human-readable file formats can be read by humans? Seems like a pretty good reason to me.
(Well, for configuration files it’s inevitable that they are read (and edited!) by humans. Files for persistent storage of some sort or the other don’t really need to be read or edited by humans.)
Why should I use a human readable file format in preference to a binary one? Is there ever a situation when this isn't the case?
Yes, compressed volumes (zip, jpeg, mp3, etc) would be suboptimal if they were human readable.
I guess it's probably not good in most situations. I think the main reason for formats such as JSON and XML is web development, and general use over the web, where you need to be able to process data on the user side and you can't necessarily read binary. A good example of a bad case for a human readable format would be anything non-textual, such as images, video or audio. I've noticed non-binary formats being used in web development where it does not make sense - I feel guilty!
Often files become part of your human interface, and thus they should be human friendly (not programmer-only).
The only time that I use a binary stream for files that aren't archives is when I want to conceal things from the casual observer. For instance, if I'm making temporary files that only my application should be editing, I'll use binary.
It's not an attempt to obfuscate; rather, it's just discouraging the user from editing the file by hand (which could break the application).
One instance where this would be a good idea is storing/saving running data about some game, i.e. to save your game and continue later. Other scenarios would be intermediate files, but those are typically binary/byte-compiled anyway.
Why should I use a human readable file format in preference to a binary one?
Depends on the content and context, i.e. where the data is coming from and going. If the data is typically written directly by a human, storing it in a format that can be manipulated through a text editor is a good idea. For example, program source code will normally be stored as human readable, with good reason. However, if we are archiving it, or sharing it using a version control system, our storage strategy will change.
The human-readable format is simpler to parse and debug if you have a problem with a field (example: a field contains a number where the spec says the field must be a string); also, the human-readable format is closer to the problem domain.
I prefer the binary format when there is a lot of data AND I'm sure that I have the software for parsing it :)
When reading Fielding's dissertation about REST, I really liked the concept of "Architectural Properties"; one that stuck was "Visibility". That's what we're talking about here: being able to 'see' the data. Huge benefits when debugging the system.
One aspect that I find missing in the other answers: enforcing semantics.
From the moment you go for human readable, you allow the silly notepad user to create data to be fed into the system. No way to guarantee this data makes sense. No way to guarantee the system will respond in a sensible way.
So in the case where you don't need to notepad-inspect your data, and you want to enforce valid data (e.g. by use of an API) rather than validating it first, you had better avoid human readable data. If debuggability is an issue (it most often is), inspection of the data can be done through the API, too.
Human readable is not the same as easier for a machine to parse.
Take human natural language as an example. :) Machine parsing of human language is still an open problem that is far from fully solved.
So I agree with https://stackoverflow.com/a/714111/2727173 which has much deeper insight on this question.
