RGB pictures detected as YCbCr with libjpeg-turbo

I'm using libjpeg-turbo to open JPEG pictures in a C program. Pictures in the RGB colorspace are detected as YCbCr, while grayscale and CMYK pictures are detected correctly.
I thought the problem might be two different versions of jpeglib.h (where the J_COLOR_SPACE enum is defined), or libjpeg conflicting with libjpeg-turbo, but there is only one jpeglib.h in the environment I'm using to compile libjpeg-turbo and my program.
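For reference, here is a minimal sketch of how the colorspace typically ends up being queried with libjpeg/libjpeg-turbo (the filename and the missing error handling are simplifications). Note that cinfo.jpeg_color_space describes the data as stored in the file, which for baseline JFIF JPEGs is almost always YCbCr even when the source image was RGB, while cinfo.out_color_space is the colorspace the decoder will actually produce:
#include <stdio.h>
#include <jpeglib.h>

int main(void)
{
    struct jpeg_decompress_struct cinfo;
    struct jpeg_error_mgr jerr;
    FILE *fp = fopen("picture.jpg", "rb");   /* placeholder filename */
    if (!fp) return 1;

    cinfo.err = jpeg_std_error(&jerr);
    jpeg_create_decompress(&cinfo);
    jpeg_stdio_src(&cinfo, fp);
    jpeg_read_header(&cinfo, TRUE);

    /* jpeg_color_space: colorspace of the compressed data in the file.
     * out_color_space:  colorspace the decoder will convert to
     *                   (JCS_RGB by default for YCbCr input). */
    printf("stored colorspace: %d, output colorspace: %d\n",
           (int)cinfo.jpeg_color_space, (int)cinfo.out_color_space);

    jpeg_destroy_decompress(&cinfo);
    fclose(fp);
    return 0;
}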
Any ideas would be much appreciated.

Related

SDL2 pre-multiplying alpha channel when loading surface on OS X?

I'm loading 32 bit RGBA normalmap textures, with a heightmap encoded in the alpha channel, via SDL2 2.0.7 and SDL2_image 2.0.2 on OS X Sierra.
Every pixel in these textures has a non-zero RGB value, encoding a directional normal vector. A directional vector of (0, 0, 0) (i.e. black) is invalid.
And yet, when I load such a texture via SDL2_image, the areas of the texture with an alpha value of 0 also yield RGB values of 0. I think SDL is perhaps pre-multiplying the alpha value for these pixels?
Attached is one of these normalmap textures. You can confirm it is valid by opening the texture in e.g. GIMP and using the color picker on one of the transparent areas. You'll see that, indeed, the transparent areas still have an RGB color that is blue-ish (an encoded normal vector).
And below is a minimal test case illustrating the issue for the attached PNG file:
#include <SDL_image.h>
#include <assert.h>

int main(int argc, char **argv) {
    SDL_Surface *s = IMG_Load("green3_2_nm.png");
    assert(s);
    /* Assumes a tightly packed surface (pitch == w * BytesPerPixel). */
    for (int i = 0; i < s->w * s->h; i++) {
        const Uint32 *in =
            (const Uint32 *)((const Uint8 *)s->pixels + i * s->format->BytesPerPixel);
        SDL_Color color;
        SDL_GetRGBA(*in, s->format, &color.r, &color.g, &color.b, &color.a);
        assert(color.r || color.g || color.b);
    }
    SDL_FreeSurface(s);
    return 0;
}
I'm compiling this test case with gcc $(pkg-config --cflags --libs sdl2_image) test.c
The assertion on the RGB components (assert(color.r || color.g || color.b)) will fail several rows into the image -- i.e. exactly where the alpha value drops to 0.
I have tried both TGA and PNG image formats, but SDL does the same thing to both of them.
Is this a bug in SDL, or am I missing something? I'm curious if folks see this same issue on other platforms as well.
===
Answer: Core Graphics, the default image loading backend for SDL2_image on Apple OS X, does indeed pre-multiply alpha -- always. The solution is to recompile SDL2_image without Core Graphics support, and instead enable libpng, libjpeg, and any other image codecs you require:
./configure \
--disable-imageio \
--disable-png-shared \
--disable-tif-shared \
--disable-jpg-shared \
--disable-webp-shared
On my system, I had to disable Core Graphics (imageio) and also the shared-library loading of the other codecs, as you can see. This produced a fat SDL2_image.so that was statically linked against libpng, libjpeg, etc., but it worked as expected.
SDL_image is a wrapper around platform-specific image loading code, rather than using the same image loader on all platforms.
Linux: LibPNG, LibJPEG
macOS, iOS: Core Graphics
Windows: WinCodec
This has the advantage of reducing the size of SDL_image, since it doesn't have to ship with any image decoding code, and can instead link dynamically against something that's likely installed on your system already. However, on macOS and iOS, Core Graphics does not support non-premultiplied alpha, so SDL_image has to reverse it.
See: mac-opengl - Re: kCGImageAlphaFirst not implemented (Was: (no subject)) from May 2007 (from the Wayback machine):
Honestly, I wouldn't expect CGBitmapContextCreate() to support non- premultiplied alpha any time soon.
...
I'm not sure if using ImageIO + CoreGraphics was ever really targetted at being used for an image loading scheme for OpenGL applications.
This behavior was discovered in LibSDL bug #838 - OSX SDL darkens colours proportional to increase in alpha and a workaround was introduced in SDL_image changeset 240.
You can see that the workaround merely un-premultiplies the alpha, which is a horribly lossy process.
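To see why, here is a tiny standalone illustration (not SDL's actual code; the values and rounding are made up for the example): once a channel has been scaled by alpha/255 and rounded to 8 bits, dividing by alpha/255 cannot recover the original value, and at alpha 0 it is gone entirely.
#include <stdio.h>

int main(void)
{
    unsigned alphas[] = { 0, 3, 64, 255 };
    unsigned r = 130;                                /* original channel value */

    for (int i = 0; i < 4; i++) {
        unsigned a = alphas[i];
        unsigned premult  = (r * a + 127) / 255;                  /* roughly what gets stored */
        unsigned restored = a ? (premult * 255 + a / 2) / a : 0;  /* the un-premultiply workaround */
        printf("alpha %3u: %u -> premultiplied %3u -> restored %3u\n",
               a, r, premult, restored);
    }
    return 0;
}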
To address this, you could build your own version of SDL_image on macOS that uses LibPNG. This should be possible through configuration alone; you should not have to make any changes to the SDL_image code itself. To do this, use the --disable-imageio option. SDL_image ships with its own copy of the LibPNG code, so you should not need to install LibPNG in order to get this to work.

Image output in C

Quick question: is there a way to show an image (e.g. a BMP) from a file using C? It's apparently not in graphics.h, and I can't use Allegro because it does not support Borland (or so I've read). I need to use this very old compiler for a school project. Has anyone had experience doing this with other libraries? If so, which library was it? Thanks a lot.
I hope you have a visual (Windows) Borland compiler like Borland C++ Builder 3 or newer, or Turbo C++, not the MS-DOS one. In that case it is quite easy, because you can use TBitmap, which is part of the VCL, so no additional include is needed.
Here you can find some hints on rendering under Borland.
Now, here is how to display a picture from a file in your window:
// this will create and load your bitmap
Graphics::TBitmap *bmp=new Graphics::TBitmap;
bmp->LoadFromFile("image.bmp");
bmp->HandleType=bmDIB;
bmp->PixelFormat=pf32bit;
// on paint you can draw your image to a form, paintbox, another bitmap, or whatever...
Form1->Canvas->Draw(0,0,bmp); // also you can use stretch draw or copy rectangle GDI functions
// before exiting delete the bmp
delete bmp;
[Notes]
You can also save the image with bmp->SaveToFile("out.bmp"). If you need JPEG, then add:
#include <jpeg.hpp>
TJPEGImage *jpg=new TJPEGImage;
jpg->LoadFromFile("image.jpg");
bmp->Assign(jpg);
delete jpg;
This will load the JPEG into your bitmap; you can also save a JPEG the same way. Beware: older Borland versions have a bug in TJPEGImage and will crash if the JPEG resolution is too big.

How do you turn a .png file into a 2-d matrix?

I am working on a project for a Bio-medical Imaging course (Note: it is a non-programming course, so asking for programming help is not cheating. Asking for conceptual help with planning would be cheating.) where I need to manipulate an image using different mathematical transforms. I am writing in C so it can be as fast as possible. I have finished the code for the mathematical transforms, but I have realized that I do not know how to turn a grayscale .png file into a 2-d matrix/array to compute with, and I do not know how to display a .png file in C. Can anyone help me?
I'm trying to turn the "image.png" image into a 2-D array where each entry has a value between 0 and 255 and corresponds to a pixel in "image.png". I also want to go the other way: turn a 2-D array, where each entry corresponds to a pixel and has a value between 0 and 255, into a new "image_two.png" file.
I'm a somewhat new programmer. I have a solid base in Python programming, but C is new for me. I have done a lot of research and I have found a lot of people talking about using "this library" or "that library", or also "this library", but how do I use a downloaded library in C? It's unfamiliar territory for me as a Python programmer :(
I'm using Ubuntu 12.04
To reiterate:
How do you read a grayscale .png image as a 2-d array/matrix in C?
How do you display a 2-d array/matrix as a grayscale image in C?
How do you use a downloaded library in C code (specifically for the two questions above)? I found out how to use these libraries.
EDIT: I am still having trouble figuring out how to create a grayscale 2d array out of a .png file and how to make a .png file out of a grayscale 2d matrix. Can anyone else help?
You can use a more general-purpose image handling library, which you might find easier to use. I recommend FreeImage: http://freeimage.sourceforge.net/. See the pixel access section of the manual for how to get at the pixel data; you can then work with it directly or copy it into your own matrix.
To install a library in Linux, typically you will use the package manager. For example, in Debian (this includes Ubuntu) you might do:
$ apt-cache search libpng
You'll decide which package to install based on the results of running this command and then you will run
$ sudo apt-get install <package-name>
This will likely install png.h (from the development package, e.g. libpng-dev) in a location that is already on gcc's default include path, which means that to use png.h in your program, all you have to do is include it:
#include <png.h>
Skip to chapter 3 in the libpng manual for a walkthrough on reading a png file.
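For the first question, here is a minimal sketch of that walkthrough, reduced to reading an 8-bit grayscale image into a 2-D array of values 0-255 (the function and file names are my own, and error handling is kept to the bare minimum):
#include <png.h>
#include <stdio.h>
#include <stdlib.h>

unsigned char **read_gray_png(const char *path, int *width, int *height)
{
    FILE *fp = fopen(path, "rb");
    if (!fp) return NULL;

    png_structp png = png_create_read_struct(PNG_LIBPNG_VER_STRING, NULL, NULL, NULL);
    png_infop info = png_create_info_struct(png);
    if (setjmp(png_jmpbuf(png))) {              /* libpng reports errors via longjmp */
        png_destroy_read_struct(&png, &info, NULL);
        fclose(fp);
        return NULL;
    }

    png_init_io(png, fp);
    png_read_info(png, info);
    *width  = png_get_image_width(png, info);
    *height = png_get_image_height(png, info);

    /* Ask libpng to hand us plain 8-bit grayscale regardless of the file's layout. */
    png_set_strip_16(png);
    png_set_strip_alpha(png);
    if (png_get_color_type(png, info) & PNG_COLOR_MASK_COLOR)
        png_set_rgb_to_gray_fixed(png, 1, -1, -1);
    if (png_get_bit_depth(png, info) < 8)
        png_set_expand_gray_1_2_4_to_8(png);
    png_read_update_info(png, info);

    unsigned char **rows = malloc(*height * sizeof *rows);
    for (int y = 0; y < *height; y++)
        rows[y] = malloc(png_get_rowbytes(png, info));

    png_read_image(png, rows);                  /* fills rows[y][x] with values 0..255 */
    png_read_end(png, NULL);
    png_destroy_read_struct(&png, &info, NULL);
    fclose(fp);
    return rows;
}
Writing a 2-D array back out as a PNG follows the same pattern with the write-side calls (png_create_write_struct, png_write_info, png_write_image, png_write_end).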

What image formats are supported by Gdk-Pixbuf (Gtk-Image?) by Default?

I know that Gdk-Pixbuf supports png and jpg, but I cannot find an exact list of all the completely (or partially) supported image formats anywhere on the internet. It is necessary for my current project, since I need to check the extension of every file in a directory and determine whether it is supported or not by gdk-pixbuf. Any help?
I know this is 5+ years old but I had trouble finding this for PyGI / PyGObject (3.22.0).
import gi.repository.GdkPixbuf as pixbuf
Then we can get all the formats using:
for f in pixbuf.Pixbuf.get_formats():
    print(f.get_name())
On my system (might be different on yours if you installed other loaders), I get:
ani
bmp
GdkPixdata
gif
icns
ico
jpeg
png
pnm
qtif
svg
tga
tiff
wmf
xbm
xpm
Calling gdk_pixbuf_get_formats() from your own application will tell you which formats your copy of gdk-pixbuf can load.
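Since the original question is about C, here is a minimal sketch of doing exactly that (the file name and pkg-config invocation are assumptions about your setup):
/* List the formats and file extensions this copy of gdk-pixbuf can load.
 * Compile with: gcc list_formats.c $(pkg-config --cflags --libs gdk-pixbuf-2.0) */
#include <gdk-pixbuf/gdk-pixbuf.h>
#include <stdio.h>

int main(void)
{
    GSList *formats = gdk_pixbuf_get_formats();

    for (GSList *l = formats; l != NULL; l = l->next) {
        GdkPixbufFormat *fmt = l->data;
        gchar *name   = gdk_pixbuf_format_get_name(fmt);
        gchar **exts  = gdk_pixbuf_format_get_extensions(fmt);

        printf("%s:", name);
        for (gchar **e = exts; *e != NULL; e++)
            printf(" .%s", *e);
        printf("\n");

        g_strfreev(exts);
        g_free(name);
    }

    g_slist_free(formats);  /* the GdkPixbufFormat structs themselves are owned by gdk-pixbuf */
    return 0;
}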
This information should also be available from the command line via gdk-pixbuf-query-loaders.
Here is more information on pixbuf modules and supported formats.

Extracting Spot Color equivalents from TIFF

I'm trying to get the Spot color information from a TIFF file, it normally shows up under 'channels' in Photoshop. Each extra channel would have a name, which is usually a Pantone swatch name, and a CMYK equivalent.
So far, I'm getting the TIFFTAG_PHOTOSHOP with libtiff, and stepping through the blocks within. I'm finding the IRB WORD 0x03EE, which gives me the channel names, and IRB WORD 0x03EF which gives me their color equivalents...
BUT the color equivalents are in CIELab format (luminance plus two color-axis values), so I'm trying to use LittleCMS to convert just a few TIFF-packed Lab colors to CMYK.
My question: Is there an easier way? The CMYK is just an approximation of the Pantone, so if there was a quick rough translation from Lab to CMYK, I would use it.
The answer was to use the Photoshop file format documentation to parse the binary Photoshop block in the TIFF file and grab the fields I needed with bit manipulation.
LittleCMS handled the Lab -> CMYK conversion just fine.
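For anyone doing the same thing, here is a minimal sketch of the Lab -> CMYK step with LittleCMS 2 (the CMYK profile name is a placeholder, and the example Lab values are made up; the raw IRB values may need rescaling to standard Lab ranges first):
#include <lcms2.h>
#include <stdio.h>

int main(void)
{
    cmsHPROFILE lab  = cmsCreateLab4Profile(NULL);                 /* built-in D50 Lab profile */
    cmsHPROFILE cmyk = cmsOpenProfileFromFile("USWebCoatedSWOP.icc", "r");
    if (!cmyk) return 1;

    cmsHTRANSFORM xf = cmsCreateTransform(lab,  TYPE_Lab_DBL,
                                          cmyk, TYPE_CMYK_DBL,
                                          INTENT_PERCEPTUAL, 0);

    cmsCIELab in = { 47.0, 74.0, 25.0 };   /* example Lab triple (L 0..100, a/b roughly -128..127) */
    cmsFloat64Number out[4];               /* C, M, Y, K as 0..100 percentages */
    cmsDoTransform(xf, &in, out, 1);

    printf("C=%.1f M=%.1f Y=%.1f K=%.1f\n", out[0], out[1], out[2], out[3]);

    cmsDeleteTransform(xf);
    cmsCloseProfile(cmyk);
    cmsCloseProfile(lab);
    return 0;
}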
