I was discussing in a forum whether there is an alternative Windows graphics library to GDI and DirectX. Someone mentioned WinDIB but, sadly, didn't explain it any further.
I've since searched Google.
There seems to be no Wikipedia article on Windows DIB.
It appears that some graphics libraries such as SDL use a WinDIB backend.
So what exactly is WinDIB? Is there any documentation for it? Where can I learn more about it?
Windows Embedded CE and DirectX use the device-independent bitmap (DIB) as their native graphics file format.
A DIB is a file that contains information describing the following:
An image's dimensions,
The number of colors the image uses,
Values describing the colors used,
Data that describes each pixel.
A DIB also contains lesser-used parameters, like:
Information about file compression,
Significant colors (if all are not used),
Physical dimensions of the image (in case it will end up in print).
DIB files usually have the .bmp file extension, although they can use a .dib extension.
Because the DIB is so pervasive in Windows programming, Windows Embedded CE contains many functions you can use with DirectX.
Source: http://msdn.microsoft.com/en-us/library/aa917106.aspx
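To make the layout described above concrete, here is a minimal sketch (mine, not from the MSDN page) that writes the pieces a DIB/.bmp file is made of: a BITMAPFILEHEADER, a BITMAPINFOHEADER, and the padded pixel rows. It assumes a 24-bit, uncompressed, bottom-up image and omits error handling.

    /* Minimal sketch: write a 24-bit, uncompressed DIB (.bmp) from a raw
       top-down BGR buffer, using the BITMAPFILEHEADER / BITMAPINFOHEADER
       structures from <windows.h>. Error handling is omitted. */
    #include <windows.h>
    #include <stdio.h>

    static BOOL write_dib(const char *path, const BYTE *bgr, int width, int height)
    {
        DWORD stride    = ((DWORD)width * 3 + 3) & ~3u;   /* rows padded to 4 bytes */
        DWORD imageSize = stride * height;

        BITMAPFILEHEADER bfh = {0};
        BITMAPINFOHEADER bih = {0};

        bfh.bfType    = 0x4D42;                           /* "BM" */
        bfh.bfOffBits = sizeof bfh + sizeof bih;          /* no palette for 24 bpp */
        bfh.bfSize    = bfh.bfOffBits + imageSize;

        bih.biSize        = sizeof bih;
        bih.biWidth       = width;
        bih.biHeight      = height;                       /* positive => bottom-up rows */
        bih.biPlanes      = 1;
        bih.biBitCount    = 24;
        bih.biCompression = BI_RGB;                       /* uncompressed */
        bih.biSizeImage   = imageSize;

        FILE *f = fopen(path, "wb");
        if (!f) return FALSE;
        fwrite(&bfh, sizeof bfh, 1, f);
        fwrite(&bih, sizeof bih, 1, f);
        for (int y = 0; y < height; ++y) {                /* write the bottom row first */
            static const BYTE pad[3] = {0};
            fwrite(bgr + (size_t)(height - 1 - y) * width * 3, 1, (size_t)width * 3, f);
            fwrite(pad, 1, stride - (DWORD)width * 3, f);
        }
        fclose(f);
        return TRUE;
    }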
DIB stands for Device Independent Bitmap. It is a Windows-specific general bitmap format; essentially it is the format of a Windows .bmp file. It is very useful as an intermediate format, e.g. for presenting OpenCV pictures in native Windows windows.
As an example, my ColorDib.h file, from the last few days, supports a limited subset of the DIB formats, namely palette-free RGB pictures. I have used that to display OpenCV video in a native Windows window. Actually, complete source for that is at the Bitbucket repository that the link goes to.
Microsoft does not offer very much Windows API level functionality for reading or writing BMP/DIB files. In the old days all that was there was OleLoadPicturePath and friends, plus reusing the web browser if you were willing to do things in very inefficient, complex and weird ways, plus some horrid code presented in the documentation. Then came GDI+, which, although far from perfect, simplified a lot. And nowadays it's not a problem except when programming purely at the API level, where a class such as the one I linked to above comes in very handy.
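As an illustration of the "intermediate format" use mentioned above, here is a rough sketch (mine, not from the linked repository) of presenting a raw 24-bit BGR buffer in a window by describing it with a BITMAPINFOHEADER and handing it to StretchDIBits. The globals g_pixels, g_width and g_height are placeholders for wherever your pixel data comes from (an OpenCV frame, for instance), and each row is assumed to be padded to a multiple of 4 bytes.

    #include <windows.h>

    /* Placeholder pixel buffer, filled elsewhere (e.g. from an OpenCV frame). */
    static const void *g_pixels;
    static int g_width, g_height;

    /* Call this from the window procedure's WM_PAINT handler. */
    static void paint_frame(HWND hwnd)
    {
        PAINTSTRUCT ps;
        HDC hdc = BeginPaint(hwnd, &ps);

        BITMAPINFO bmi = {0};
        bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
        bmi.bmiHeader.biWidth       = g_width;
        bmi.bmiHeader.biHeight      = -g_height;   /* negative => top-down rows */
        bmi.bmiHeader.biPlanes      = 1;
        bmi.bmiHeader.biBitCount    = 24;          /* packed BGR, no palette */
        bmi.bmiHeader.biCompression = BI_RGB;

        StretchDIBits(hdc,
                      0, 0, g_width, g_height,     /* destination rectangle */
                      0, 0, g_width, g_height,     /* source rectangle      */
                      g_pixels, &bmi, DIB_RGB_COLORS, SRCCOPY);

        EndPaint(hwnd, &ps);
    }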
DIB is Device Independent Bitmap. More details on MSDN and Wikipedia. I expect that is what was being referred to.
Related
I'm encoding images as video with FFmpeg, using custom C code rather than Linux commands, because I am developing the code for an embedded system.
I am currently following through the first dranger tutorial and the code provided in the following question.
How to encode a video from several images generated in a C++ program without writing the separate frame images to disk?
I have found some "less abstract" code in the following github location.
https://github.com/FFmpeg/FFmpeg/blob/master/doc/examples/encode_video.c
And I plan to use it as well.
My end goal is simply to save video on an embedded system using embedded C source code, and I am coming up the curve too slowly. So, in summary, my question is: does it seem like I am following the correct path here? I know that my system does not come with hardware for video codec conversion, which means I need to do it in software, but I am unsure whether FFmpeg is even a feasible option for embedded work because I have yet to compile it.
The biggest red flag for me thus far is that FFmpeg uses dynamic memory allocation. I am unfamiliar with how to assess the amount of dynamic memory that it uses. This is very important information to me, and if anyone is familiar with the amount of memory used or how to assess it before compiling, I would greatly appreciate the input.
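For reference, the encode_video.c example linked above boils down to the send/receive loop below. This is only a condensed sketch of that pattern, with placeholder codec and frame parameters; the lines marked "heap" are where libavcodec's main dynamic allocations happen, which is the memory concern raised above.

    /* Condensed sketch of the encode_video.c pattern (modern libavcodec API).
       Codec, resolution and frame count are placeholders. */
    #include <libavcodec/avcodec.h>
    #include <stdio.h>

    int encode_stub(const char *outfile)
    {
        const AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_MPEG1VIDEO);
        AVCodecContext *ctx  = avcodec_alloc_context3(codec);    /* heap */

        ctx->width     = 352;
        ctx->height    = 288;
        ctx->time_base = (AVRational){1, 25};
        ctx->framerate = (AVRational){25, 1};
        ctx->pix_fmt   = AV_PIX_FMT_YUV420P;
        ctx->bit_rate  = 400000;
        if (avcodec_open2(ctx, codec, NULL) < 0) return -1;      /* heap */

        AVFrame  *frame = av_frame_alloc();                      /* heap */
        AVPacket *pkt   = av_packet_alloc();                     /* heap */
        frame->format = ctx->pix_fmt;
        frame->width  = ctx->width;
        frame->height = ctx->height;
        av_frame_get_buffer(frame, 0);                           /* heap: pixel planes */

        FILE *f = fopen(outfile, "wb");
        for (int i = 0; i < 100; i++) {
            av_frame_make_writable(frame);
            /* ... fill frame->data[0..2] with the YUV planes of frame i ... */
            frame->pts = i;
            avcodec_send_frame(ctx, frame);
            while (avcodec_receive_packet(ctx, pkt) == 0) {      /* drain encoder */
                fwrite(pkt->data, 1, pkt->size, f);
                av_packet_unref(pkt);
            }
        }

        avcodec_send_frame(ctx, NULL);                           /* flush */
        while (avcodec_receive_packet(ctx, pkt) == 0) {
            fwrite(pkt->data, 1, pkt->size, f);
            av_packet_unref(pkt);
        }

        fclose(f);
        av_frame_free(&frame);
        av_packet_free(&pkt);
        avcodec_free_context(&ctx);
        return 0;
    }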
After further research, it seems to me that encoding video is often a hardware-intensive task that can use multiple processors and gigabytes of RAM. In order to avoid this, I am performing a minimal amount of compression by utilizing the AVI format.
I have found that FFmpeg can't readily be used for bare-metal embedded systems, because the initial "make" of the library sets up configuration settings specific to the computer doing the compiling, which conflicts with the need to cross-compile. I can see that there are cross-compilation flags available, but I have not found any documentation describing how to use them. Either way, I want to avoid big heaps and multi-threading, so I moved on.
I decided to look for more basic source code elsewhere. mikekohn.net/file_formats/libkohn_avi.php is a great resource for very basic encoding without any complicated library dependencies or multi-threading. I have yet to implement it, so no guarantees, but best of luck. It is actually one of the only understandable encoding sources I have found for image-to-video applications, other than https://www.jonolick.com/home/mpeg-video-writer. However, Jon Olick's source code uses lossy encoding and a minimum frame rate (inherent to MPEG), both of which I am trying to avoid.
I want to be able to load/save JPEG files on Windows via the API, specifically gdi32.dll, because it appears to exist universally in all versions of Windows.
But I'm unable to find any information on how to do this from an array of pixels with 4 bytes per pixel (RGBA or BGRA; RGB would be fine too, since JPEG doesn't support alpha, etc.).
Not interested in an external library or gdi+. gdi32 should have the ability, but I can't seem to find enough information on how to implement it.
I am going to ignore your refusal to use anything outside of gdi32.dll, because that kind of requirement is not likely to help anyone, and as @David Heffernan said, there is no JPEG support in gdi32.dll.
There are a number of ways to load/save JPEG pictures built into winapi, and supported all the way back to Windows 2000 (and earlier...).
OleLoadPicture / OleSavePictureFile - though I am not sure if it's very easy to save your own JPEG files this way.
Gdiplus::Image allows loading & saving JPEG files.
Plain GDI does not have any support for JPEG.
If you won't countenance using a library other than GDI, then you will have to write your own JPEG library. Allow me to recommend that you reconsider your requirements.
GDI is the Graphics Device Interface. Its responsibility includes rendering primitives to the screen or to offscreen device contexts. Encoders and decoders are not included.
The standard Windows encoders and decoders are provided through the Windows Imaging Component. This component is available starting with Windows XP SP2. It is also available for Windows Store apps.
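For what it's worth, here is a rough sketch of decoding a JPEG with WIC from plain C (COBJMACROS). It assumes COM has already been initialized with CoInitialize/CoInitializeEx; the function name and the BGRA output format are just choices for the example, and error handling is collapsed to keep it short.

    /* Sketch: decode a JPEG file into a 32bpp BGRA pixel buffer with WIC. */
    #define COBJMACROS
    #include <windows.h>
    #include <wincodec.h>
    #include <stdlib.h>

    HRESULT load_jpeg_bgra(const wchar_t *path, BYTE **pixels, UINT *w, UINT *h)
    {
        IWICImagingFactory    *factory   = NULL;
        IWICBitmapDecoder     *decoder   = NULL;
        IWICBitmapFrameDecode *frame     = NULL;
        IWICFormatConverter   *converter = NULL;

        HRESULT hr = CoCreateInstance(&CLSID_WICImagingFactory, NULL,
                                      CLSCTX_INPROC_SERVER,
                                      &IID_IWICImagingFactory, (void **)&factory);
        if (FAILED(hr)) return hr;

        hr = IWICImagingFactory_CreateDecoderFromFilename(
                 factory, path, NULL, GENERIC_READ,
                 WICDecodeMetadataCacheOnDemand, &decoder);
        if (SUCCEEDED(hr)) hr = IWICBitmapDecoder_GetFrame(decoder, 0, &frame);

        /* Convert whatever the JPEG decodes to into 32bpp BGRA. */
        if (SUCCEEDED(hr)) hr = IWICImagingFactory_CreateFormatConverter(factory, &converter);
        if (SUCCEEDED(hr))
            hr = IWICFormatConverter_Initialize(converter, (IWICBitmapSource *)frame,
                                                &GUID_WICPixelFormat32bppBGRA,
                                                WICBitmapDitherTypeNone, NULL, 0.0,
                                                WICBitmapPaletteTypeCustom);

        if (SUCCEEDED(hr)) hr = IWICFormatConverter_GetSize(converter, w, h);
        if (SUCCEEDED(hr)) {
            UINT stride = *w * 4;
            *pixels = (BYTE *)malloc((size_t)stride * *h);
            hr = IWICFormatConverter_CopyPixels(converter, NULL, stride,
                                                stride * *h, *pixels);
        }

        if (converter) IWICFormatConverter_Release(converter);
        if (frame)     IWICBitmapFrameDecode_Release(frame);
        if (decoder)   IWICBitmapDecoder_Release(decoder);
        if (factory)   IWICImagingFactory_Release(factory);
        return hr;   /* the caller owns and must free(*pixels) on success */
    }

The encoding direction is analogous, going through IWICImagingFactory::CreateEncoder with GUID_ContainerFormatJpeg and writing your pixel array into an IWICBitmapFrameEncode.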
I'm looking for advice on how to generate videos in C. The main issues I'll be dealing with are
Must be open source, would prefer BSD type license but GPL is acceptable
Must be reasonably well documented (I'm looking at you FFMPEG)
Must be able to generate a non-compressed video
Must be able to draw each frame
Should be able to set the frame rate (though of course I can just make n identical frames)
My toolkit is the GNU development system on UNIX like systems (Linux, OS X, Cygwin, ...)
Having said that, I'm picky about these requirements because if a library doesn't meet them, I know I can pretty easily generate the individual frames with libgd and use ffmpeg commands to output a video. The point is that I'd rather be able to draw the frames and generate the video entirely in my C code. Even better would be to be able to bundle the library with my own source (BSD license) so that my users don't need to worry about getting things installed on their particular platform.
I'm not set on a particular video codec, beyond the availability of non-compressed video (I'm visualizing changes in simulated rotational spectroscopy as rotational parameters or other variables, like temperature, change). Advice on a particular codec is welcome.
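For completeness, here is roughly what the libgd-plus-ffmpeg fallback described above can look like with no intermediate frame files: each frame is drawn with libgd and piped as a PNG into an ffmpeg process. It assumes the ffmpeg command-line tool is installed; the drawing, frame rate and output codec (rawvideo here, for uncompressed output) are placeholders.

    /* Sketch of the fallback approach: draw frames with libgd and pipe them,
       as PNGs, straight into an ffmpeg process (no frame files on disk). */
    #include <gd.h>
    #include <stdio.h>

    int main(void)
    {
        const int width = 640, height = 480, frames = 250;

        /* ffmpeg reads PNGs from stdin at 25 fps and writes uncompressed video. */
        FILE *pipe = popen("ffmpeg -y -r 25 -f image2pipe -vcodec png -i - "
                           "-c:v rawvideo -pix_fmt bgr24 out.avi", "w");
        if (!pipe) return 1;

        for (int i = 0; i < frames; i++) {
            gdImagePtr im = gdImageCreateTrueColor(width, height);
            int white = gdImageColorAllocate(im, 255, 255, 255);

            /* Draw this frame: here, just a dot sweeping across the image. */
            gdImageFilledEllipse(im, (i * width) / frames, height / 2, 20, 20, white);

            gdImagePng(im, pipe);   /* one PNG per frame, straight to ffmpeg */
            gdImageDestroy(im);
        }

        pclose(pipe);
        return 0;
    }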
I am trying to develop a YUV image viewer. The objective is to read YUV images and display them in a window. I am using C to develop this application.
After transforming the YUV information to RGB data, I use the cvShowImage and cvResize functions from OpenCV to view the image. To use this application on other systems, OpenCV needs to be installed on them, since I am using precompiled DLLs. I fixed this by recompiling the program with static libraries, following the guide in "How to embedd openCV Dll's in Executable", and generated a fresh executable which is portable across machines. This caused my application's file size to grow from 100 KB to 2350 KB. This growth is enormous, and I suspect it is because several unnecessary functions are getting linked into my final executable.
To address this I used the Eliminate Unreferenced Data switch (/OPT:REF), but it did not solve anything.
Is there any way to solve this issue?
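As an aside, the YUV-to-RGB step mentioned in the question needs nothing from OpenCV; a per-pixel conversion along these lines is enough (a sketch using the common full-range BT.601 coefficients, with U and V already upsampled to the pixel):

    #include <stdint.h>

    static uint8_t clamp8(int v) { return v < 0 ? 0 : (v > 255 ? 255 : (uint8_t)v); }

    /* R = Y + 1.402 (V-128), G = Y - 0.344 (U-128) - 0.714 (V-128),
       B = Y + 1.772 (U-128), done in 8.8 fixed point. */
    static void yuv_to_rgb(uint8_t y, uint8_t u, uint8_t v,
                           uint8_t *r, uint8_t *g, uint8_t *b)
    {
        int d = u - 128;
        int e = v - 128;
        *r = clamp8(y + ((359 * e) >> 8));
        *g = clamp8(y - ((88 * d + 183 * e) >> 8));
        *b = clamp8(y + ((454 * d) >> 8));
    }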
The linker automatically removes all the unneeded code from your exe.
But remember that your program incorporates:
all the code to read all kinds of image formats (bmp, jpg, tiff, etc, etc, etc),
a good part of the OpenCV core (matrix handling)
some OS-specific windowing and message handling (to display the image and be able to resize/click/etc)
some other utilities that you use without knowing it
That's it... a few MB of code.
EDIT
Do not forget to build your program in Release mode. In Debug mode, extra debugging-related information is added on top of the standard code.
I have a video decrypter library that can decode an obsolete video format and returns video frames as BMP and audio as WAV through callback functions. Now I need to encode a new video in some standard format under Windows.
What might be the standard way to do this? I am using Win32/C. I guess Windows has its own encoding library and drivers, so I don't need to use FFMPEG. Any pointer to example code, or at least to a library to look at, would be greatly helpful.
Edit: I accept that FFMPEG is the easiest way to do it.
On Windows, you have two native choices.
The old Windows Multimedia library, which is too ancient to consider seriously but does include the Video Compression Manager.
And then there's DirectShow.
It's certainly doable through DirectShow, but you better enjoy COM programming and the concepts of Filters, Graphs, and Monikers (oh my). See this tutorial for a crash course:
Recompressing an AVI File
And the MSDN documentation.
A simpler approach is indeed to use a library like FFMPEG or VLC.
To save yourself heartache, I echo Frank's suggestion of just using FFMPEG. Executing a separate FFMPEG process with the correct arguments will without a doubt be the easiest way to achieve your encoding goals.
Your other options include:
libavcodec - The central library used in FFMPEG. I warn that there don't appear to be many Windows binaries of libavcodec available, so you'd probably have to compile your own, which, at minimum, would require a Cygwin or MinGW setup.
ffdshow-tryouts - A video codec library implemented as a DirectShow filter based on libavcodec. They do seem to have an API for manipulating it, but it's a .NET library.
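As a sketch of the separate-process route suggested above (video only; the WAV audio could be muxed in via a second input or a second pass), the decoded frames are fed to ffmpeg's stdin as raw BGR data over a pipe. The function name, frame callback, codec and file names here are placeholders, not part of any particular API.

    /* Sketch: pipe raw BGR frames (e.g. the decoded BMP payloads) into an
       ffmpeg child process via _popen. Codec and file names are placeholders. */
    #include <stdio.h>

    int encode_with_ffmpeg(int width, int height, int fps, int frame_count,
                           const unsigned char *(*get_frame)(int index))
    {
        char cmd[512];
        _snprintf(cmd, sizeof cmd,
                  "ffmpeg -y -f rawvideo -pix_fmt bgr24 -s %dx%d -r %d -i - "
                  "-c:v libx264 -pix_fmt yuv420p out.mp4",
                  width, height, fps);

        FILE *pipe = _popen(cmd, "wb");          /* binary-mode pipe to ffmpeg */
        if (!pipe) return -1;

        for (int i = 0; i < frame_count; i++) {
            const unsigned char *bgr = get_frame(i);   /* frames from your callbacks */
            fwrite(bgr, 1, (size_t)width * height * 3, pipe);
        }
        return _pclose(pipe);
    }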
I would suggest looking at the VirtualDub source code. It's a well known encoder that uses VFW. You may be able to get some ideas from that software.
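For completeness, the VFW route that VirtualDub uses comes down to the AVIFile API. Below is a rough sketch of writing 24-bit DIB frames into an AVI file, uncompressed for clarity; plugging in a VCM codec would be done with AVIMakeCompressedStream before setting the stream format. The function and frame callback are placeholders; link against vfw32.lib, and error handling is omitted.

    /* Sketch: write 24-bit DIB frames to an AVI file with the VFW AVIFile API. */
    #include <windows.h>
    #include <vfw.h>

    int write_avi(const char *path, int width, int height, int fps,
                  int frame_count, const BYTE *(*get_frame)(int index))
    {
        AVIFileInit();

        PAVIFILE file = NULL;
        AVIFileOpenA(&file, path, OF_CREATE | OF_WRITE, NULL);

        AVISTREAMINFOA si = {0};
        si.fccType = streamtypeVIDEO;
        si.dwScale = 1;
        si.dwRate  = fps;                        /* fps = dwRate / dwScale */
        si.rcFrame.right  = width;
        si.rcFrame.bottom = height;

        PAVISTREAM stream = NULL;
        AVIFileCreateStreamA(file, &stream, &si);

        BITMAPINFOHEADER bih = {0};
        bih.biSize        = sizeof bih;
        bih.biWidth       = width;
        bih.biHeight      = height;
        bih.biPlanes      = 1;
        bih.biBitCount    = 24;
        bih.biCompression = BI_RGB;              /* uncompressed DIB frames */
        AVIStreamSetFormat(stream, 0, &bih, sizeof bih);

        DWORD frameBytes = ((width * 3 + 3) & ~3u) * height;   /* padded rows */
        for (int i = 0; i < frame_count; i++)
            AVIStreamWrite(stream, i, 1, (LPVOID)get_frame(i), frameBytes,
                           AVIIF_KEYFRAME, NULL, NULL);

        AVIStreamRelease(stream);
        AVIFileRelease(file);
        AVIFileExit();
        return 0;
    }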