I have a 16-bit TIFF image with no color profile (a camera profile is embedded), and I am trying to read its RGB values in OpenCV. However, comparing the output values to the values shown when the image is opened in GIMP, for example, gives totally different numbers (GIMP is opened with the "keep the image's profile" option, i.e. no profile conversion). I have also tried another image studio application, CaptureOne, and its result agrees with GIMP but differs from the OpenCV output.
I am not sure whether reading and opening the image in OpenCV is somehow wrong, in spite of using the IMREAD_UNCHANGED flag.
I have also tried reading the image with the FreeImage library, but the result is the same.
Here is a snippet of the code reading the pixel values in OpenCV:
const float Conv16_8 = 255.f / 65535.f;
cv::Vec3d curVal;
// upperLeft/lowerRight are just some pre-defined corners for the ROI
for (int row = upperLeft.y; row <= lowerRight.y; row++) {
    unsigned char* dataUCPtr = img.data + row * img.step[0];
    unsigned short* dataUSPtr = (unsigned short*)dataUCPtr;
    dataUCPtr += 3 * upperLeft.x;
    dataUSPtr += 3 * upperLeft.x;
    for (int col = upperLeft.x; col <= lowerRight.x; col++) {
        if (/* some check if the pixel is valid */) {
            if (img.depth() == CV_8U) {
                for (int chan = 0; chan < 3; chan++) {
                    curVal[chan] = *dataUCPtr++;
                }
            }
            else if (img.depth() == CV_16U) {
                for (int chan = 0; chan < 3; chan++) {
                    curVal[chan] = (*dataUSPtr++) * Conv16_8;
                }
            }
            avgX += curVal;
        }
        else {
            dataUCPtr += 3;
            dataUSPtr += 3;
        }
    }
}
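As a side note, if the validity test could be expressed as an 8-bit mask, the same per-channel average over the ROI could be computed with cv::mean instead of the manual pointer walk. This is only a rough sketch with placeholder names (validMask is assumed, not part of the code above):

#include <opencv2/opencv.hpp>

// Average the ROI with cv::mean; validMask is CV_8U, non-zero = valid pixel.
cv::Vec3d averageOverROI(const cv::Mat& img, const cv::Mat& validMask,
                         cv::Point upperLeft, cv::Point lowerRight)
{
    // +1 because the loops above treat lowerRight as inclusive.
    cv::Rect roi(upperLeft, lowerRight + cv::Point(1, 1));
    cv::Scalar avg = cv::mean(img(roi), validMask(roi));  // still in BGR channel order

    // Scale 16-bit averages into the 0..255 range, as the loop above does.
    double scale = (img.depth() == CV_16U) ? 255.0 / 65535.0 : 1.0;
    return cv::Vec3d(avg[0], avg[1], avg[2]) * scale;
}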
Here is the image I am reading (the original post links to a download of it), together with its RGB readouts:
CaptureOne Studio, AdobeRGB:
vs. OpenCV RGB (A1 = white --> F1 = black):
PS1: I have also tried changing the output color space in GIMP/CaptureOne to sRGB, but the difference stays almost the same; it gets no closer to the OpenCV output.
PS2: I am reversing the channel order of OpenCV's imread output (COLOR_RGB2BGR) before extracting the RGB values from the image, roughly as in the sketch below.
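For completeness, this is roughly how the image is loaded and the channel order is swapped before the loop above (the file name and the small wrapper function are just placeholders of mine):

#include <opencv2/opencv.hpp>

// "capture.tif" is a placeholder for the actual file path.
cv::Mat loadAsRGB(const std::string& path = "capture.tif")
{
    // IMREAD_UNCHANGED keeps the 16-bit depth instead of converting to 8-bit.
    cv::Mat img = cv::imread(path, cv::IMREAD_UNCHANGED);
    // imread returns B,G,R channel order; swap so the loop reads R,G,B.
    cv::cvtColor(img, img, cv::COLOR_RGB2BGR);  // same swap as COLOR_BGR2RGB for 3 channels
    return img;
}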
OP said:
I have a 16-bit tiff image with no color profile (camera profile embedded)
Well no, your image definitely has a color profile, and it should not be ignored. The embedded profile is as important as the numeric values of each pixel. Without a defined profile, the pixel values are somewhat meaningless.
From what I can tell, OpenCV does not linearize gamma by default... except when it does... Regardless, the gamma (tone response curve) indicated in the profile is unique to it (the curve is plotted in the original answer):
Now compare that to the sRGB curve (also plotted there):
They do not match, so the sRGB transformations can't be used.
If you are looking for performance, applying the curve via a LUT is usually more efficient than a full-blown color management system.
In this case, a LUT can be extracted directly from the profile. The following LUT was taken from the color profile: 16-bit values, 256 steps:
// Phase One TRC from color profile
profileTRC = [0x0000,0x032A,0x0653,0x097A,0x0CA0,0x0FC2,0x12DF,0x15F8,0x190C,0x1C19,0x1F1E,0x221C,0x2510,0x27FB,0x2ADB,0x2DB0,0x3079,0x3334,0x35E2,0x3882,0x3B11,0x3D91,0x4000,0x425D,0x44A8,0x46E3,0x490C,0x4B26,0x4D2F,0x4F29,0x5113,0x52EF,0x54BC,0x567B,0x582D,0x59D1,0x5B68,0x5CF3,0x5E71,0x5FE3,0x614A,0x62A6,0x63F7,0x653E,0x667B,0x67AE,0x68D8,0x69F9,0x6B12,0x6C23,0x6D2C,0x6E2D,0x6F28,0x701C,0x710A,0x71F2,0x72D4,0x73B2,0x748B,0x755F,0x762F,0x76FC,0x77C6,0x788D,0x7951,0x7A13,0x7AD4,0x7B93,0x7C51,0x7D0F,0x7DCC,0x7E8A,0x7F48,0x8007,0x80C8,0x8189,0x824C,0x8310,0x83D5,0x849B,0x8562,0x862B,0x86F4,0x87BF,0x888A,0x8956,0x8A23,0x8AF2,0x8BC0,0x8C90,0x8D61,0x8E32,0x8F04,0x8FD7,0x90AA,0x917E,0x9252,0x9328,0x93FD,0x94D3,0x95AA,0x9681,0x9758,0x9830,0x9908,0x99E1,0x9ABA,0x9B93,0x9C6C,0x9D45,0x9E1F,0x9EF9,0x9FD3,0xA0AD,0xA187,0xA260,0xA33A,0xA414,0xA4EE,0xA5C8,0xA6A1,0xA77B,0xA854,0xA92D,0xAA05,0xAADD,0xABB5,0xAC8D,0xAD64,0xAE3B,0xAF11,0xAFE7,0xB0BC,0xB191,0xB265,0xB339,0xB40C,0xB4DE,0xB5B0,0xB680,0xB750,0xB820,0xB8EE,0xB9BC,0xBA88,0xBB54,0xBC1F,0xBCE9,0xBDB1,0xBE79,0xBF40,0xC005,0xC0CA,0xC18D,0xC24F,0xC310,0xC3D0,0xC48F,0xC54D,0xC609,0xC6C5,0xC780,0xC839,0xC8F2,0xC9A9,0xCA60,0xCB16,0xCBCA,0xCC7E,0xCD31,0xCDE2,0xCE93,0xCF43,0xCFF2,0xD0A0,0xD14D,0xD1FA,0xD2A5,0xD350,0xD3FA,0xD4A3,0xD54B,0xD5F2,0xD699,0xD73E,0xD7E3,0xD887,0xD92B,0xD9CE,0xDA6F,0xDB11,0xDBB1,0xDC51,0xDCF0,0xDD8F,0xDE2C,0xDEC9,0xDF66,0xE002,0xE09D,0xE138,0xE1D2,0xE26B,0xE304,0xE39C,0xE434,0xE4CB,0xE562,0xE5F8,0xE68D,0xE722,0xE7B7,0xE84B,0xE8DF,0xE972,0xEA04,0xEA97,0xEB29,0xEBBA,0xEC4B,0xECDC,0xED6C,0xEDFC,0xEE8B,0xEF1A,0xEFA9,0xF038,0xF0C6,0xF154,0xF1E1,0xF26F,0xF2FC,0xF388,0xF415,0xF4A1,0xF52D,0xF5B9,0xF645,0xF6D0,0xF75B,0xF7E6,0xF871,0xF8FC,0xF987,0xFA11,0xFA9B,0xFB26,0xFBB0,0xFC3A,0xFCC4,0xFD4E,0xFDD7,0xFE61,0xFEEB,0xFF75,0xFFFF]
If a matching array of the linearized values were needed, it would be
[0x0000,0x0101,0x0202,0x0303,0x0404....
But such an array is not needed for most uses, as the index into the profileTRC array directly relates to the linear value.
I.e. profileTRC[0x80] is 0xAD64,
and the corresponding linear value is 0x80 * 0x101.
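To make that concrete, here is one way the curve could be used in code. This is only a sketch under the interpretation above (profileTRC[i] is the encoded 16-bit value for the linear 8-bit index i), so recovering a linear value from an encoded 16-bit file sample is a reverse lookup with interpolation:

#include <algorithm>
#include <cstdint>

// Define this with the full 256-entry profileTRC array given above.
extern const uint16_t profileTRC[256];

// Forward direction: linear 8-bit index -> encoded 16-bit value.
// The matching 16-bit linear value is simply index * 0x101.
inline uint16_t encodeLinear(uint8_t linearIndex)
{
    return profileTRC[linearIndex];
}

// Inverse direction (assuming the 16-bit samples read by OpenCV are encoded
// with this curve): locate the sample between two LUT entries and
// interpolate to get an approximate 16-bit linear value.
inline uint16_t linearizeEncoded(uint16_t encoded)
{
    const uint16_t* begin = profileTRC;
    const uint16_t* end = profileTRC + 256;
    const uint16_t* it = std::lower_bound(begin, end, encoded);
    if (it == begin) return 0x0000;
    if (it == end)   return 0xFFFF;

    int hi = int(it - begin);
    int lo = hi - 1;
    double frac = double(encoded - profileTRC[lo]) /
                  double(profileTRC[hi] - profileTRC[lo]);
    return static_cast<uint16_t>((double(lo) + frac) * 0x101 + 0.5);
}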
It turned out that it is all about having, loading, and applying the proper ICC profile to the cv::Mat data. To do that, one must use a color management engine alongside OpenCV, such as LittleCMS.
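For anyone landing here, a minimal sketch of what that looks like with LittleCMS (lcms2). The profile path is a placeholder: the embedded camera profile first has to be extracted from the TIFF, or loaded directly from memory with cmsOpenProfileFromMem.

#include <lcms2.h>
#include <opencv2/opencv.hpp>

// Convert the 16-bit BGR data of a cv::Mat from the embedded camera profile
// to sRGB using LittleCMS. Returns an empty Mat if the profile can't be opened.
cv::Mat toSRGB16(const cv::Mat& bgr16, const char* iccPath /* e.g. "camera_profile.icc" */)
{
    CV_Assert(bgr16.type() == CV_16UC3 && bgr16.isContinuous());

    cmsHPROFILE camera = cmsOpenProfileFromFile(iccPath, "r");
    if (!camera)
        return cv::Mat();
    cmsHPROFILE srgb = cmsCreate_sRGBProfile();

    cmsHTRANSFORM xform = cmsCreateTransform(camera, TYPE_BGR_16,
                                             srgb,   TYPE_BGR_16,
                                             INTENT_PERCEPTUAL, 0);

    cv::Mat out(bgr16.size(), CV_16UC3);
    // cmsDoTransform takes the number of pixels, not bytes.
    cmsDoTransform(xform, bgr16.ptr(), out.ptr(),
                   static_cast<cmsUInt32Number>(bgr16.total()));

    cmsDeleteTransform(xform);
    cmsCloseProfile(camera);
    cmsCloseProfile(srgb);
    return out;
}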
I'll try to make this question as concise as possible, but don't hesitate to ask for clarification.
I'm dealing with legacy code, and I'm trying to load thousands of 8-bit images from disk to create a texture for each.
I've tried multiple things, and I'm at the point where I'm trying to load my 8-bit images into 32-bit surfaces and then create a texture from each surface.
The problem: while loading an 8-bit image into a 32-bit surface works, when I try SDL_CreateTextureFromSurface, I end up with a lot of textures that are completely blank (full of transparent pixels, 0x00000000).
Not all textures are wrong, though. Each time I run the program, I get different "bad" textures. Sometimes there are more, sometimes fewer. And when I step through the program, I always end up with a correct texture (is that a timing problem?).
I know that the loading into the SDL_Surface works, because I'm saving all the surfaces to disk and they're all correct. But I inspected the textures using NVIDIA Nsight Graphics, and more than half of them are blank.
Here's the offending code:
int __cdecl IMG_SavePNG(SDL_Surface*, const char*);

SDL_Texture* Resource8bitToTexture32(SDL_Renderer* renderer, SDL_Color* palette, int paletteSize, void* dataAddress, int Width, int Height)
{
    u32 uiCurrentOffset;
    u32 uiSourceLinearSize = (Width * Height);
    unsigned char* pSourceData = (unsigned char*)dataAddress; // 8-bit palette indices
    SDL_Color *currentColor;
    char strSurfacePath[500];

    // The texture we're creating
    SDL_Texture* newTexture = NULL;

    // Load image at specified address
    SDL_Surface* tempSurface = SDL_CreateRGBSurface(0x00, Width, Height, 32, 0x00FF0000, 0x0000FF00, 0x000000FF, 0xFF000000);
    SDL_SetSurfaceBlendMode(tempSurface, SDL_BLENDMODE_NONE);

    if (SDL_MUSTLOCK(tempSurface))
        SDL_LockSurface(tempSurface);

    for (uiCurrentOffset = 0; uiCurrentOffset < uiSourceLinearSize; uiCurrentOffset++)
    {
        currentColor = &palette[pSourceData[uiCurrentOffset]];
        if (pSourceData[uiCurrentOffset] != PC_COLOR_TRANSPARENT)
        {
            ((u32*)tempSurface->pixels)[uiCurrentOffset] = (u32)((currentColor->a << 24) + (currentColor->r << 16) + (currentColor->g << 8) + (currentColor->b << 0));
        }
    }

    if (SDL_MUSTLOCK(tempSurface))
        SDL_UnlockSurface(tempSurface);

    // Create texture from surface pixels
    newTexture = SDL_CreateTextureFromSurface(renderer, tempSurface);

    // Save the surface to disk for verification only
    sprintf(strSurfacePath, "c:\\tmp\\surfaces\\%s.png", GenerateUniqueName());
    IMG_SavePNG(tempSurface, strSurfacePath);

    // Get rid of old loaded surface
    SDL_FreeSurface(tempSurface);

    return newTexture;
}
Note that in the original code I'm checking for boundaries, and for NULL after each SDL_Create*. I'm also aware that it would be better to have a spritesheet for the textures instead of loading each texture individually.
EDIT:
Here's a sample of what I'm observing in Nsight if I capture a frame and use the Resources view.
The first 3186 textures are correct. Then I get 43 empty textures. Then 228 correct textures. Then 100 bad ones. Then 539 correct ones. Then 665 bad ones. It goes on randomly like that, and it changes each time I run my program.
Again, the surfaces saved by IMG_SavePNG are always correct. This seems to indicate that something happens when I call SDL_CreateTextureFromSurface, but at this point I don't want to rule anything out, because it's a very weird problem and it smells of undefined behaviour all over the place. I just can't find it.
With the help of @mark-benningfield, I was able to find the problem.
TL;DR
There's a bug (or at least an undocumented behavior) in SDL with the DX11 renderer. There's a workaround; see the end of this answer.
CONTEXT
I'm trying to load around 12,000 textures when my program starts. I know that's not a good idea, but I was planning on using it as a stepping stone to another, saner system.
DETAILS
What I realized while debugging this problem is that the SDL renderer for DirectX 11 does this when it creates a texture:
result = ID3D11Device_CreateTexture2D(rendererData->d3dDevice,
&textureDesc,
NULL,
&textureData->mainTexture
);
Microsoft's documentation for the ID3D11Device::CreateTexture2D method indicates that:
If you don't pass anything to pInitialData, the initial content of the memory for the resource is undefined. In this case, you need to write the resource content some other way before the resource is read.
If we're to believe that article :
Default Usage
The most common type of usage is default usage. To fill a default texture (one created with D3D11_USAGE_DEFAULT) you can :
[...]
After calling ID3D11Device::CreateTexture2D, use ID3D11DeviceContext::UpdateSubresource to fill the default texture with data from a pointer provided by the application.
So it looks like D3D11_CreateTexture uses the second method of the default usage to initialize the texture and its contents.
But right after that, SDL calls SDL_UpdateTexture (without checking the return value; I'll get to that later). If we dig down to the D3D11 renderer, we find this:
static int
D3D11_UpdateTextureInternal(D3D11_RenderData *rendererData, ID3D11Texture2D *texture, int bpp, int x, int y, int w, int h, const void *pixels, int pitch)
{
    ID3D11Texture2D *stagingTexture;
    [...]
    /* Create a 'staging' texture, which will be used to write to a portion of the main texture. */
    ID3D11Texture2D_GetDesc(texture, &stagingTextureDesc);
    [...]
    result = ID3D11Device_CreateTexture2D(rendererData->d3dDevice, &stagingTextureDesc, NULL, &stagingTexture);
    [...]
    /* Get a write-only pointer to data in the staging texture: */
    result = ID3D11DeviceContext_Map(rendererData->d3dContext, (ID3D11Resource *)stagingTexture, 0, D3D11_MAP_WRITE, 0, &textureMemory);
    [...]
    /* Commit the pixel buffer's changes back to the staging texture: */
    ID3D11DeviceContext_Unmap(rendererData->d3dContext, (ID3D11Resource *)stagingTexture, 0);

    /* Copy the staging texture's contents back to the texture: */
    ID3D11DeviceContext_CopySubresourceRegion(rendererData->d3dContext, (ID3D11Resource *)texture, 0, x, y, 0, (ID3D11Resource *)stagingTexture, 0, NULL);

    SAFE_RELEASE(stagingTexture);

    return 0;
}
Note : code snipped for conciseness.
This seems to indicate, based on that article I mentioned, that SDL is using the second method of the Default Usage to allocate the texture memory on the GPU, but uses the Staging Usage to upload the actual pixels.
I don't know that much about DX11 programming, but that mixing up of techniques got my programmer's sense tingling.
I contacted a game programmer I know and explained the problem to him. He told me the following interesting bits :
The driver gets to decide where it stores staging textures; they usually end up in CPU RAM.
It's much better to pass a pInitialData pointer, as the driver can then decide to upload the textures asynchronously (see the sketch just below).
If you load too many staging textures without committing them to the GPU, you can fill up the RAM.
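For illustration only, since this is not what SDL does internally: a sketch of that second point at the raw D3D11 level, creating a default-usage texture with its pixels supplied up front. The function and parameter names are mine.

#include <d3d11.h>

// Create a default-usage texture and hand the pixel data to the driver in the
// same call, so it can schedule the upload itself instead of going through a
// separately-created staging texture.
ID3D11Texture2D* CreateFilledTexture(ID3D11Device* device,
                                     const void* pixels, UINT pitch,
                                     UINT width, UINT height)
{
    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width = width;
    desc.Height = height;
    desc.MipLevels = 1;
    desc.ArraySize = 1;
    desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;   // byte layout of SDL's ARGB8888 surfaces
    desc.SampleDesc.Count = 1;
    desc.Usage = D3D11_USAGE_DEFAULT;
    desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;

    D3D11_SUBRESOURCE_DATA initialData = {};
    initialData.pSysMem = pixels;     // the CPU-side pixel buffer
    initialData.SysMemPitch = pitch;  // bytes per row

    ID3D11Texture2D* texture = NULL;
    HRESULT hr = device->CreateTexture2D(&desc, &initialData, &texture);
    return SUCCEEDED(hr) ? texture : NULL;
}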
I then wondered why SDL didn't return an "out of memory" error when I called SDL_CreateTextureFromSurface, and I found out why (again, snipped for concision):
SDL_Texture *
SDL_CreateTextureFromSurface(SDL_Renderer * renderer, SDL_Surface * surface)
{
    [...]
    SDL_Texture *texture;
    [...]
    texture = SDL_CreateTexture(renderer, format, SDL_TEXTUREACCESS_STATIC,
                                surface->w, surface->h);
    if (!texture) {
        return NULL;
    }
    [...]
    if (SDL_MUSTLOCK(surface)) {
        SDL_LockSurface(surface);
        SDL_UpdateTexture(texture, NULL, surface->pixels, surface->pitch);
        SDL_UnlockSurface(surface);
    } else {
        SDL_UpdateTexture(texture, NULL, surface->pixels, surface->pitch);
    }
    [...]
    return texture;
}
If the creation of the texture succeeds, SDL doesn't care whether updating it succeeded or not (there is no check on SDL_UpdateTexture's return value).
WORKAROUND
The poor man's workaround for this problem is to call SDL_RenderPresent each time you call SDL_CreateTextureFromSurface.
It's probably fine to do it only once every hundred textures or so, depending on your texture size. But be aware that calling SDL_CreateTextureFromSurface repeatedly without presenting the renderer will actually fill up the system RAM, and SDL won't return any error condition you could check for.
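As a sketch of that workaround (LoadSurfaceForResource() is a hypothetical stand-in for whatever produces each SDL_Surface), the loading loop then looks something like this:

#include <SDL.h>

SDL_Surface* LoadSurfaceForResource(int index); /* hypothetical helper */

void LoadAllTextures(SDL_Renderer* renderer, SDL_Texture** textures, int resourceCount)
{
    for (int i = 0; i < resourceCount; ++i)
    {
        SDL_Surface* surface = LoadSurfaceForResource(i);
        textures[i] = SDL_CreateTextureFromSurface(renderer, surface);
        SDL_FreeSurface(surface);

        // Present periodically so the staging textures created by
        // SDL_UpdateTexture are committed to the GPU instead of piling up
        // in system RAM. Tune the interval to your texture sizes.
        if ((i % 100) == 0)
            SDL_RenderPresent(renderer);
    }
}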
The irony of this is that if I had implemented a "correct" loading loop with a completion percentage on screen, I would never have had that problem. But fate had me implement this the quick-and-dirty way, as a proof of concept for a bigger system, and I got sucked into this problem.
I am trying to use the C interface of CoreGraphics and CoreFoundation to save a buffer of 32-bit RGBA data (as a void*) to a PNG file. When I try to finalize the CGImageDestinationRef, the following error message is printed to the console:
libpng error: No IDATs written into file
As far as I can tell, the CGImageRef I'm adding to the CGImageDestinationRef is valid.
Relevant code:
void saveImage(const char* szImage, void* data, size_t dataSize, size_t width, size_t height)
{
    CFStringRef name = CFStringCreateWithCString(NULL, szImage, kCFStringEncodingASCII);
    CFURLRef texture_url = CFURLCreateWithFileSystemPath(
        NULL,
        name,
        kCFURLPOSIXPathStyle,
        false);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGDataProviderRef dataProvider = CGDataProviderCreateWithData(NULL, data, dataSize, NULL);
    CGImageRef image = CGImageCreate(width, height, 8, 32, 32 * width, colorSpace,
                                     kCGImageAlphaLast | kCGBitmapByteOrderDefault, dataProvider,
                                     NULL, FALSE, kCGRenderingIntentDefault);

    // From Image I/O Programming Guide, "Working with Image Destinations"
    float compression = 1.0; // Lossless compression if available.
    int orientation = 4;     // Origin is at bottom, left.
    CFStringRef myKeys[3];
    CFTypeRef myValues[3];
    CFDictionaryRef myOptions = NULL;
    myKeys[0] = kCGImagePropertyOrientation;
    myValues[0] = CFNumberCreate(NULL, kCFNumberIntType, &orientation);
    myKeys[1] = kCGImagePropertyHasAlpha;
    myValues[1] = kCFBooleanTrue;
    myKeys[2] = kCGImageDestinationLossyCompressionQuality;
    myValues[2] = CFNumberCreate(NULL, kCFNumberFloatType, &compression);
    myOptions = CFDictionaryCreate(NULL, (const void **)myKeys, (const void **)myValues, 3,
                                   &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);

    CFStringRef type = CFStringCreateWithCString(NULL, "public.png", kCFStringEncodingASCII);
    CGImageDestinationRef dest = CGImageDestinationCreateWithURL(texture_url, type, 1, myOptions);
    CGImageDestinationAddImage(dest, image, NULL);

    if (!CGImageDestinationFinalize(dest))
    {
        // ERROR!
    }

    CFRelease(image);
    CFRelease(colorSpace);
    CFRelease(dataProvider);
    CFRelease(dest);
    CFRelease(texture_url);
}
This post is similar, except I'm not using the Objective C interface: Saving a 32 bit RGBA buffer into a .png file (Cocoa OSX)
Answering my own question:
In addition to the issues pointed out by NSGod, the IDAT issue was an invalid parameter to CGImageCreate(): parameter 5 is bytesPerRow, not bitsPerRow. So 32 * width was incorrect; 4 * width is correct (see the corrected call below).
Despite what this page of the official documentation lists, UTCoreTypes.h is located in CoreServices.framework on Mac OS X, not MobileCoreServices.framework.
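For clarity, the corrected CGImageCreate() call looks like this (everything else as in the code above):

// Parameter 5 is bytesPerRow: 4 bytes per 32-bit RGBA pixel, so 4 * width.
CGImageRef image = CGImageCreate(width, height,
                                 8,           // bitsPerComponent
                                 32,          // bitsPerPixel
                                 4 * width,   // bytesPerRow (was incorrectly 32 * width)
                                 colorSpace,
                                 kCGImageAlphaLast | kCGBitmapByteOrderDefault,
                                 dataProvider, NULL, FALSE,
                                 kCGRenderingIntentDefault);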
There are numerous issues with your code.
Here it is, rewritten the way I would do it:
void saveImage(const char* szImage, void* data, size_t dataSize, size_t width, size_t height)
{
    CFStringRef name = CFStringCreateWithCString(NULL, szImage, kCFStringEncodingUTF8);
    CFURLRef texture_url = CFURLCreateWithFileSystemPath(
        NULL,
        name,
        kCFURLPOSIXPathStyle,
        false);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGDataProviderRef dataProvider = CGDataProviderCreateWithData(NULL, data,
                                                                  dataSize, NULL);
    CGImageRef image = CGImageCreate(width, height, 8, 32, 32 * width, colorSpace,
                                     kCGImageAlphaLast | kCGBitmapByteOrderDefault,
                                     dataProvider, NULL, FALSE, kCGRenderingIntentDefault);

    // From Image I/O Programming Guide, "Working with Image Destinations"
    float compression = 1.0; // Lossless compression if available.
    int orientation = 4;     // Origin is at bottom, left.
    CFStringRef myKeys[3];
    CFTypeRef myValues[3];
    CFDictionaryRef myOptions = NULL;
    myKeys[0] = kCGImagePropertyOrientation;
    myValues[0] = CFNumberCreate(NULL, kCFNumberIntType, &orientation);
    myKeys[1] = kCGImagePropertyHasAlpha;
    myValues[1] = kCFBooleanTrue;
    myKeys[2] = kCGImageDestinationLossyCompressionQuality;
    myValues[2] = CFNumberCreate(NULL, kCFNumberFloatType, &compression);
    myOptions = CFDictionaryCreate(NULL, (const void **)myKeys,
                                   (const void **)myValues, 3, &kCFTypeDictionaryKeyCallBacks,
                                   &kCFTypeDictionaryValueCallBacks);

    CGImageDestinationRef dest =
        CGImageDestinationCreateWithURL(texture_url, kUTTypePNG, 1, NULL);
    CGImageDestinationAddImage(dest, image, NULL);
    CGImageDestinationSetProperties(dest, myOptions);

    if (!CGImageDestinationFinalize(dest))
    {
        // ERROR!
    }
}
First, never use ASCII when dealing with file system paths; use UTF-8. Second, you were constructing a dictionary to be used to set the properties of the image, but you were passing it to the wrong function. The documentation for CGImageDestinationCreateWithURL() says the following:
CGImageDestinationCreateWithURL
Creates an image destination that writes to a location specified by a
URL.
CGImageDestinationRef CGImageDestinationCreateWithURL (
CFURLRef url,
CFStringRef type,
size_t count,
CFDictionaryRef options
);
Parameters
options - Reserved for future use. Pass NULL.
You were trying to pass a dictionary of properties where you were supposed to pass NULL. (Also, you can simply use the kUTTypePNG uniform type identifier string constant instead of re-creating it.) First call CGImageDestinationCreateWithURL(), then call CGImageDestinationAddImage() to add the image, then call CGImageDestinationSetProperties() to pass in the dictionary of properties you created, and finally call CGImageDestinationFinalize().
[UPDATE]: If, after these changes, you're still getting the libpng error: No IDATs written into file error, try the following. First, make sure that dataProvider is non-NULL; in other words, make sure the CGDataProviderCreateWithData() call succeeded. Second, if dataProvider is valid, try changing the bitmap info from kCGImageAlphaLast | kCGBitmapByteOrderDefault to simply kCGImageAlphaPremultipliedLast and see if it succeeds.