OpenCV pretrained classifiers for face detection (XML files) in C

I am currently working on a face detection project in C. I have studied the different algorithms that OpenCV uses for object and face detection, and I noticed that its pre-trained data is stored in XML files. Is it possible to read these XML files from C so I can use them with my face detection program written in C?

Related

How to convert images to video using FFMpeg for embedded applications?

I'm encoding images as video with FFmpeg using custom C code rather than the command-line tools, because I am developing the code for an embedded system.
I am currently following through the first dranger tutorial and the code provided in the following question.
How to encode a video from several images generated in a C++ program without writing the separate frame images to disk?
I have found some "less abstract" code in the following github location.
https://github.com/FFmpeg/FFmpeg/blob/master/doc/examples/encode_video.c
And I plan to use it as well.
My end goal is simply to save video on an embedded system using embedded C source code, and I am coming up the curve too slowly. So in summary my question is: does it seem like I am following the correct path here? I know that my system does not come with hardware for video codec conversion, which means I need to do it in software, but I am unsure whether FFmpeg is even a feasible option for embedded work because I have yet to compile it.
The biggest red flag for me thus far is that FFmpeg uses dynamic memory allocation. I am unfamiliar with how to assess the amount of dynamic memory that it uses. This is very important information to me, and if anyone is familiar with the amount of memory used or how to assess it before compiling, I would greatly appreciate the input.
After further research, it seems to me that encoding video is often a hardware-intensive task that can use multiple processors and gigabytes of RAM. To avoid this I am performing a minimal amount of compression by using the AVI format.
I have found that FFmpeg can't readily be used for bare-metal embedded systems, because the library's initial configure/make step sets up configuration settings specific to the machine doing the compiling, which conflicts with the need to cross-compile. I can see that there are cross-compilation flags available, but I have not found any documentation describing how to use them. Either way, I want to avoid big heaps and multi-threading, so I moved on.
I decided to look for more basic source code elsewhere. mikekohn.net/file_formats/libkohn_avi.php is a great resource for very basic encoding without any complicated library dependencies or multi-threading. I have yet to implement it, so no guarantees, but best of luck. It is actually one of the only understandable encoding sources I have found for image-to-video applications, other than https://www.jonolick.com/home/mpeg-video-writer. However, Jon Olick's source code uses lossy encoding and a minimum framerate (inherent to MPEG), both of which I am trying to avoid.

Auto-detect language of file

Is there a way to auto-detect the language that a file is written in, or a way to say "this file is 20% C, 30% Python, 50% shell"? There must be some way, because GitHub's remote server seems to auto-detect languages. Also, if a file is a hybrid of languages, what is the de facto way to set the file extension so that it represents the languages in the file? Maybe files all have to be homogeneous with regard to language; I am still learning. Additionally, is there a way to auto-detect the byte count of a codebase on a remote site like GitHub: basically GitHub's language bar, except showing how many bytes the project takes up.
The file command on Linux does a reasonable job of guessing the language of a file, but basically it's just looking at the first characters of a file and comparing them to known situations: "if the file starts with blah-blah-blah it is probably thus-and-so".
As far as the file containing "20% C, 30% Python, etc" -- what would you do with such a file if you had one? Neither the C compiler nor the Python compiler would be happy with it.
I think GitHub uses file extensions to decide what language a file is written in (its Linguist library also falls back to shebang lines and content heuristics when the extension is ambiguous).
As for auto-detecting file extension using the language, I suppose you could create a classification model.
You would have to create a large dataset with many files in different languages and their corresponding labels (language names), then feed that training data to a neural network (maybe an RNN/LSTM) to train the model. Then use that model on new data to predict the language from the code.
I have never done something like this. But it would be a fun project.

Conversion of simulink model into C code using code generation

I need to know which toolboxes are available to convert a Simulink model into C code.
Are there any general steps to follow to convert the model into C? If there are, please do guide me.
Thanking you in advance for your valuable guidance.
Yes. The relevant product is Simulink Coder, which requires MATLAB Coder; it generates C/C++ code from Simulink models for rapid prototyping, hardware-in-the-loop testing, simulation acceleration, or simply an executable to be run outside of MATLAB and Simulink.
If you want to deploy the generated code on an embedded platform, or customize and optimize the way the code is generated, you also need Embedded Coder.
As a starting point, I suggest watching the videos on the Simulink Coder web page. Be aware though that code generation is a rather complex topic for advanced users, and the tools are expensive.

Can XML files be read quickly using pure C?

I am designing a face detection system using pure C. I don't want to go through the trouble of coding the training algorithm in C when it is easily available in OpenCV for Viola-Jones and LBP. However, I noticed that in OpenCV the training data is stored in XML files. I am planning to run the detector on a TI development board and also use the DSP on the board. The board runs Ubuntu, so it has a file system, but I am trying to make the detector as fast as possible and I don't want access to the training data to affect detector speed. So is it a good idea to proceed with XML files? Or is there a way to convert these files to CSV, which I think is easier to read from?
Thanks in advance

Viewing images in a window using C Program?

I am trying to develop a YUV image viewer. The objective is to read YUV images and display them in a window. I am using C to develop this application.
After transforming the YUV information to RGB data, I use the cvShowImage and cvResize functions from OpenCV to view the image. To use this application on other systems, OpenCV had to be installed on them because I was using precompiled DLLs. I fixed this by recompiling the program with static libraries, following the guide in "How to embedd openCV Dll's in Executable", and generated a fresh executable that is portable across machines. This caused my application file size to grow from 100KB to 2350KB. This growth is enormous, and I suspect it is because several unnecessary functions are getting linked into my final executable.
To fix this I used the linker switch Eliminate Unreferenced Data (/OPT:REF), but it did not change anything.
Is there any way to solve this issue?
The linker automatically removes all the unneeded code from your exe.
But remember that your program incorporates:
- all the code to read all kinds of image formats (bmp, jpg, tiff, etc.)
- a good part of the OpenCV core (matrix handling)
- some OS-specific windowing and message handling (to display the image and be able to resize/click/etc.)
- some other utilities that you use without knowing it
That's it... a few MB of code.
EDIT
Do not forget to build your program in Release mode. In Debug mode, extra debugging-related information is added on top of the standard code.
