LoadImage behaves differently depending on the Windows color setting - c

Hi,
I am using LoadImage to load a 24-bit BMP file and then trying to get the bitmap info:
hBitmap = (HBITMAP)LoadImage(NULL, "logo.bmp", IMAGE_BITMAP, 0, 0,
                             LR_LOADFROMFILE | LR_DEFAULTSIZE);
GetObject(hBitmap, sizeof(BITMAP), &bm);
When I do this with the Windows display color setting at 32-bit (high color), I get bmBitsPixel = 32, but if I set the display color setting to 16-bit, I get bmBitsPixel = 16.
Can anyone please explain what this means?
If I use the following formula to calculate the size of the bitmap, then the size depends on the Windows color setting:
size = bmWidth * bmHeight * bmBitsPixel / 8
Thanks and Regards

An HBITMAP is a device-dependent bitmap: its internal representation depends on the color format of your screen.
Accordingly, if you set your display color format to 32 bits per pixel (bpp), then your bitmap will use 32 bpp. If you switch your color format to 16 bpp, the bitmap will follow and use 16 bpp.
Your formula is correct: you have to take bmBitsPixel into account when computing the bitmap size.
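For example, a minimal sketch of checking this after the fact (reusing the names from the question; error handling omitted):

BITMAP bm = {0};
HBITMAP hBitmap = (HBITMAP)LoadImage(NULL, "logo.bmp", IMAGE_BITMAP, 0, 0,
                                     LR_LOADFROMFILE | LR_DEFAULTSIZE);
GetObject(hBitmap, sizeof(BITMAP), &bm);

// For a device-dependent bitmap, bmBitsPixel follows the current display depth,
// so this size changes when the Windows color setting changes.
// bmWidthBytes already includes any scan-line alignment padding.
DWORD size = (DWORD)bm.bmWidthBytes * bm.bmHeight;  // ~ bmWidth * bmHeight * bmBitsPixel / 8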

An HBITMAP can be loaded as a device-independent bitmap (DIB section) with the LoadImage API by specifying the LR_CREATEDIBSECTION flag along with the other flags - without it, Windows converts the image to a device-dependent bitmap. Note that this will only work when the source image is a 32 BPP bitmap; lower bit depths (8 BPP, 16 BPP, 24 BPP, etc.) will be loaded with their exact bit planes/color depths - which then have to be converted to the monitor's color depth in order to actually display them.
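For example (a sketch reusing the question's "logo.bmp"; the flags shown are the ones discussed here):

// Keep the file's own format instead of converting to the screen's format.
HBITMAP hDib = (HBITMAP)LoadImage(NULL, "logo.bmp", IMAGE_BITMAP, 0, 0,
                                  LR_LOADFROMFILE | LR_CREATEDIBSECTION);
BITMAP bm = {0};
GetObject(hDib, sizeof(BITMAP), &bm);
// For a DIB section, bm.bmBits points at the pixel data and
// bm.bmBitsPixel reflects the file's color depth, not the screen's.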
Because no processing occurs, you may get a 32 BPP bitmap whose pixels are not pre-multiplied for alpha rendering (the AlphaBlend() function), so you will get color fringing and other unwanted artifacts. In those cases you need to do the pre-multiply on every pixel. The following is a small snippet of code - it does not do much error checking, so you will need to verify that the BITMAP has the correct plane count and color depth before letting it execute. There are a few ways to optimize the code below (such as using a lookup table), but it is mainly for explanation purposes.
This code can only work IF the bm.bmBits pointer is not NULL, bm.bmPlanes equals 1, and bmBitsPixel equals 32:
RGBQUAD* lprgbSrc = (RGBQUAD*)bm.bmBits;
if( lprgbSrc )
{
    RGBQUAD* lprgbEnd = (RGBQUAD*)((size_t)lprgbSrc + (size_t)bm.bmHeight * bm.bmWidthBytes);
    while( lprgbSrc != lprgbEnd )
    {
        switch( lprgbSrc->rgbReserved )
        {
        case 255: // Pixel at full opacity - no color shift required...
            break;
        case 0:   // Pixel at full transparency - must go full black
            *(DWORD*)lprgbSrc = 0;
            break;
        // Need to pre-multiply by the alpha (rgbReserved) and
        // divide by 255 to get a correct brightness level for correct
        // rendering of the color when mixed on top of the background
        default:
            lprgbSrc->rgbRed   = ((size_t)lprgbSrc->rgbRed   * (size_t)lprgbSrc->rgbReserved) / 255;
            lprgbSrc->rgbBlue  = ((size_t)lprgbSrc->rgbBlue  * (size_t)lprgbSrc->rgbReserved) / 255;
            lprgbSrc->rgbGreen = ((size_t)lprgbSrc->rgbGreen * (size_t)lprgbSrc->rgbReserved) / 255;
            break;
        }
        lprgbSrc++;
    }
}
Note that certain Windows APIs accept a non-premultiplied HBITMAP (ImageList, for instance) when the appropriate flags are applied.
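For reference, a pre-multiplied 32 BPP bitmap is typically drawn with per-pixel alpha roughly like this (a sketch only; hdcDest, x and y are the destination DC and position, and hMemDC is assumed to be a memory DC with the bitmap selected into it):

// Link against msimg32.lib for AlphaBlend.
BLENDFUNCTION bf = {0};
bf.BlendOp = AC_SRC_OVER;
bf.SourceConstantAlpha = 255;   // rely on the per-pixel alpha channel only
bf.AlphaFormat = AC_SRC_ALPHA;  // source bits are 32 BPP with (pre-multiplied) alpha
AlphaBlend(hdcDest, x, y, bm.bmWidth, bm.bmHeight,
           hMemDC, 0, 0, bm.bmWidth, bm.bmHeight, bf);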

The LoadImage function is not working because it needs a positive height. Some bitmap files are saved with a negative height value, which means the image is stored top-down instead of bottom-up. The LoadImage function in VC++ 6.0 / MFC was not programmed for negative heights, so it fails and just returns NULL. Just change the biHeight member of the BITMAPINFOHEADER structure to a positive value. LoadImage will then open just about any bitmap - 8-bit, 24-bit or 32-bit - with a positive biHeight.
BITMAPFILEHEADER m_bmfHeader;
BITMAPINFOHEADER m_bi;
HANDLE hFile = CreateFile(image_filename,
                          GENERIC_READ,
                          0,
                          NULL, OPEN_EXISTING,
                          FILE_ATTRIBUTE_NORMAL, NULL);
if(hFile == INVALID_HANDLE_VALUE)
{
    AfxMessageBox("Cannot Open a New File");
    return;
}
DWORD dwBytesRead = 0;
ReadFile(hFile, (LPSTR)&m_bmfHeader, sizeof(BITMAPFILEHEADER), &dwBytesRead, NULL);
ReadFile(hFile, (LPSTR)&m_bi, sizeof(BITMAPINFOHEADER), &dwBytesRead, NULL);
int m_nSizeImage = m_bi.biSizeImage;
BYTE *lpbitmap = (BYTE*)malloc(m_nSizeImage);
ReadFile(hFile, (LPSTR)lpbitmap, m_nSizeImage, &dwBytesRead, NULL);
CloseHandle(hFile);

// Rewrite the file with a positive biHeight so LoadImage can open it.
hFile = CreateFile(image_filename, GENERIC_WRITE, 0, NULL, CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
DWORD dwBytesWritten = 0;
m_bi.biHeight = abs(m_bi.biHeight); // Height always positive!
WriteFile(hFile, (LPSTR)&m_bmfHeader, sizeof(BITMAPFILEHEADER), &dwBytesWritten, NULL);
WriteFile(hFile, (LPSTR)&m_bi, sizeof(BITMAPINFOHEADER), &dwBytesWritten, NULL);
WriteFile(hFile, (LPSTR)lpbitmap, m_bi.biSizeImage, &dwBytesWritten, NULL);
CloseHandle(hFile);
free(lpbitmap); // Now you can use LoadImage(...)
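After the file has been rewritten with a positive biHeight, the call from the original question should succeed, for example:

HBITMAP hBitmap = (HBITMAP)LoadImage(NULL, image_filename, IMAGE_BITMAP, 0, 0,
                                     LR_LOADFROMFILE | LR_DEFAULTSIZE);
// hBitmap should now be non-NULL for 8-bit, 24-bit and 32-bit files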

Related

Scaled Layers in GDI

Original question
Basically, I have two bitmaps, and I want to put one behind the other, scaled down to half its size.
Both are centered, and are of the same resolution.
The catch is that I want to put more than one bitmap on this back layer eventually, and want the scaling to apply to the whole layer and not just the individual bitmap.
My thought is that I would use a memory DC for the back layer, capture its contents into a bitmap of its own, and use StretchBlt to place it in my main DC.
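(A bare-bones sketch of that pattern, using the variable names listed below - this is just an illustration of the idea, not my actual code; hLayer here is a hypothetical bitmap holding the captured back layer:)

// Render the whole back layer into the memory DC at full size first.
HBITMAP hLayer = CreateCompatibleBitmap(hdc, resh, resv); // hypothetical layer bitmap
HGDIOBJ hOld = SelectObject(ldc, hLayer);
// ... draw hBitmap (and any other back-layer bitmaps) into ldc here ...

// Then scale the finished layer as a whole into the main DC.
// With the viewport origin at the center, this centers the half-size layer.
StretchBlt(hdc, -(resh / 4), -(resv / 4), resh / 2, resv / 2,
           ldc, 0, 0, resh, resv, SRCCOPY);

SelectObject(ldc, hOld);
DeleteObject(hLayer);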
My own code right now doesn't work, and I can't make sense of it, let alone find anyone who has done this before for direction.
My variables at the moment are as follows
hBitmap - back bitmap
hFiller - front bitmap
hdc - main DC
ldc - back DC (made with CreateCompatibleDC(hdc))
resh - width of hdc
resv - height of hdc
note that my viewport origin is set to the center
--this part above is solved, with the one major issue being that it does not keep the back layers...
Revised Question
Here's my code. Everything works as intended except for the fact that the layers do not properly stack. They seem to erase what is underneath or fill it with black.
For the record this is a direct copy of my code. I explain sections of it but there is nothing missing between the code blocks.
case WM_TIMER:
{
    switch(wParam)
    {
    case FRAME:
If any position or rotation values have changed, the following section of code clears the screen and prepares it to be redrawn:
        if(reload == TRUE){
            tdc = CreateCompatibleDC(hdc);
            oldFiller = SelectObject(tdc, hFiller);
            GetObject(hFiller, sizeof(filler), &filler);
            StretchBlt(hdc, 0-(resh/2), 0-(resv/2), resh, resv, tdc, 0, 0, 1, 1, SRCCOPY);
            SelectObject(tdc, oldFiller);
            DeleteDC(tdc);
            if(turn == TRUE){
                xForm.eM11 = (FLOAT) cos(r/angleratio);
                xForm.eM12 = (FLOAT) sin(r/angleratio);
                xForm.eM21 = (FLOAT)-sin(r/angleratio);
                xForm.eM22 = (FLOAT) cos(r/angleratio);
                xForm.eDx  = (FLOAT) 0.0;
                xForm.eDy  = (FLOAT) 0.0;
                SetWorldTransform(hdc, &xForm);
            }
This is the part that only partially works. At a distance of 80 my scale value will make my bitmap 1 pixel by 1 pixel, so I consider this my "draw distance"
It scales properly, but the layers do not stack, as I mentioned above
            for(int i = 80; i > 1; i--){
                tdc = CreateCompatibleDC(hdc);
                tbm = CreateCompatibleBitmap(hdc, resh, resv);
                SelectObject(tdc, tbm);
                BitBlt(tdc, 0-(resh/2), 0-(resv/2), resh, resv, hdc, 0, 0, SRCCOPY);
                //drawing code goes in here
                ldc = CreateCompatibleDC(hdc);
                oldBitmap = SelectObject(ldc, hBitmap);
                StretchBlt(tdc,
                           (int)(angleratio*atan((double)128/(double)i)), 0,
                           (int)(angleratio*atan((double)128/(double)i)),
                           (int)(angleratio*atan((double)128/(double)i)),
                           ldc, 0, 0, 128, 128, SRCCOPY);
                SelectObject(ldc, oldBitmap);
                DeleteDC(ldc);
                BitBlt(hdc, 0, 0, resh, resv, tdc, 0, 0, SRCCOPY);
                DeleteObject(tbm);
                DeleteDC(tdc);
            }
            reload = FALSE;
        }
This section below just checks for keyboard input which changes the position or rotation of the "camera"
This part works fine and can be ignored
        if(GetKeyboardState(NULL) == TRUE){
            reload = TRUE;
            if(GetKeyState(VK_UP) < 0){
                fb--;
            }
            if(GetKeyState(VK_DOWN) < 0){
                fb++;
            }
            if(GetKeyState(VK_RIGHT) < 0){
                lr--;
            }
            if(GetKeyState(VK_LEFT) < 0){
                lr++;
            }
            if(GetKeyState(0x57) < 0){ // 'W'
                p++;
            }
            if(GetKeyState(0x53) < 0){ // 'S'
                p--;
            }
        }
        break;
    }
}
break;

GDI Monochrome Bitmap flips bits on each creation of HBITMAP

I've been trying to load a 16-bit (A1R5G5B5) BMP from a file and use its alpha channel as a bit mask. I have gotten everything to work except for one problem that has been troubling me for the past week: when I use CreateDIBitmap to build the 1-bit mask from a buffer of bytes, the bitmap created uses the inverse of all its bits, but only on the first draw. On the next paint the bits flip to match the data supplied and stay that way for all subsequent draws. This behavior is very strange and occurs on all Windows versions. I've tracked it down to some sort of setting of the HDC and possibly CreateDIBitmap; I've tried many things, including setting the foreground and background colors of both HDCs to various values before and after, but everything I have tried still shows this behavior.
Here is a POC to try:
BITMAPINFOHEADER bmih;
BITMAPINFO bmi;
HBITMAP mask;
PBYTE data;
PBYTE alpha;
SIZE dimension;

void WhenCreated() // WM_CREATE
{
    dimension.cx = 3;
    dimension.cy = 1;
    // DIB scan lines are DWORD-aligned, so even a 3-pixel-wide 1bpp row
    // occupies 4 bytes (CreateDIBitmap reads and GetDIBits writes that much).
    alpha = (PBYTE)malloc(4);
    data  = (PBYTE)malloc(4);
    alpha[0] = 0xA0; // 0b10100000
}

#define BIN_SCAPE(B,A) (B[0]&(1<<A))?1:0

void WhenPresenting(HDC H) // WM_PAINT
{
    printf(
        "ALPHA:\t%i %i %i\n",
        BIN_SCAPE(alpha,7),
        BIN_SCAPE(alpha,6),
        BIN_SCAPE(alpha,5)
    );
    HDC memory;
    HBITMAP matter;
    memory = CreateCompatibleDC(NULL);
    memset(&bmi, 0x0, sizeof(BITMAPINFO));
    bmi.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth = dimension.cx;
    bmi.bmiHeader.biHeight = dimension.cy;
    bmi.bmiHeader.biPlanes = 1;
    bmi.bmiHeader.biBitCount = 1;
    bmi.bmiHeader.biCompression = BI_RGB;
    memset(&bmih, 0x0, sizeof(BITMAPINFOHEADER));
    bmih.biSize = sizeof(BITMAPINFOHEADER);
    bmih.biWidth = bmi.bmiHeader.biWidth;
    bmih.biHeight = bmi.bmiHeader.biHeight;
    mask = CreateDIBitmap(
        memory,
        &bmih,
        CBM_INIT,
        alpha,
        &bmi,
        DIB_RGB_COLORS
    );
    SelectObject(memory, mask);
    GetDIBits(memory, mask, 0, 1, data, &bmi, DIB_RGB_COLORS);
    printf(
        "DATA:\t%i %i %i\n",
        BIN_SCAPE(data,7),
        BIN_SCAPE(data,6),
        BIN_SCAPE(data,5)
    );
    StretchBlt(
        H,
        0, 0, 128, 128,
        memory,
        0, 0, dimension.cx, dimension.cy,
        SRCCOPY
    );
    DeleteDC(memory);
    DeleteObject(mask);
}
When the program first loads, the data displayed is the inverse of what is given; subsequent paints make the data match what was supplied, as seen in the console output, so there is definitely a flipping of bits happening. My guess is that the first HDC supplied may use a different palette than all the ones after it, which causes this behavior?
Now it all makes sense: it's the palette that is changing.
"...the biBitCount member is less than 16, the biClrUsed member specifies the actual number of colors the graphics engine or device driver accesses." (from MSDN)
If you use a color HDC in CreateDIBitmap you'll get some color paired with black, and that color will change on each repaint - which will start freaking you out until you realize it happens because you have not set a palette for the HBITMAP: when an HDC is created, its color palette is undefined unless you specify one. You could use SetDIBits, but if you want it done during CreateDIBitmap, try this:
PBITMAPINFO pbmi;
RGBQUAD palette[2];
{
    // this will give you white (1) and black (0)
    palette[0].rgbBlue  = 0x00;
    palette[0].rgbGreen = 0x00;
    palette[0].rgbRed   = 0x00;
    palette[1].rgbBlue  = 0xFF;
    palette[1].rgbGreen = 0xFF;
    palette[1].rgbRed   = 0xFF;
    // using a PBITMAPINFO in order to allocate room for the palette
    pbmi = (PBITMAPINFO)LocalAlloc(LPTR, sizeof(BITMAPINFO) + sizeof(RGBQUAD)*2); // this technically allocates an extra RGBQUAD
    pbmi->bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
    pbmi->bmiHeader.biWidth = dimension.cx;
    pbmi->bmiHeader.biHeight = dimension.cy;
    pbmi->bmiHeader.biPlanes = 1;
    pbmi->bmiHeader.biBitCount = 1;
    pbmi->bmiHeader.biCompression = BI_RGB;
    pbmi->bmiHeader.biClrUsed = 2;      // palette is two colors long
    pbmi->bmiHeader.biClrImportant = 2;
    memcpy(pbmi->bmiColors, palette, sizeof(RGBQUAD)*2);
    mask = CreateDIBitmap(
        memory,
        &bmih,
        CBM_INIT,
        alpha,
        pbmi,
        DIB_RGB_COLORS
    );
}
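(Alternatively, as noted above, the palette can be applied after creation with SetDIBits instead of during CreateDIBitmap; a rough sketch reusing the variables from the snippets above:)

// Create an uninitialized 1bpp bitmap, then copy the DIB bits in;
// pbmi (with its two-entry color table) supplies the palette this time.
mask = CreateBitmap(dimension.cx, dimension.cy, 1, 1, NULL);
SetDIBits(memory, mask, 0, dimension.cy, alpha, pbmi, DIB_RGB_COLORS);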

Using GDI+ in C - GdiplusStartup function returning 2

I am attempting to use GDI+ in my C application to take a screenshot and save it as a JPEG. I am using GDI+ to convert the BMP to JPEG, but when calling the GdiplusStartup function the return code is 2 (InvalidParameter) instead of 0:
int main()
{
    GdiplusStartupInput gdiplusStartupInput;
    ULONG_PTR gdiplusToken;
    //if(GdiplusStartup(&gdiplusToken, &gdiplusStartupInput, NULL) != 0)
    //    printf("GDI NOT WORKING\n");
    printf("%d", GdiplusStartup(&gdiplusToken, &gdiplusStartupInput, NULL));

    HDC hdc = GetDC(NULL);                // get the desktop device context
    HDC hDest = CreateCompatibleDC(hdc);  // create a device context to use yourself

    // get the height and width of the screen
    int height = GetSystemMetrics(SM_CYVIRTUALSCREEN);
    int width = GetSystemMetrics(SM_CXVIRTUALSCREEN);

    // create a bitmap
    HBITMAP hbDesktop = CreateCompatibleBitmap(hdc, width, height);

    // use the previously created device context with the bitmap
    SelectObject(hDest, hbDesktop);

    // copy from the desktop device context to the bitmap device context
    // call this once per 'frame'
    BitBlt(hDest, 0, 0, width, height, hdc, 0, 0, SRCCOPY);

    // after the recording is done, release the desktop context you got..
    ReleaseDC(NULL, hdc);
    // ..and delete the context you created
    DeleteDC(hDest);

    SaveJpeg(hbDesktop, "a.jpeg", 100);
    GdiplusShutdown(gdiplusToken);
    return 0;
}
I am trying to figure out why the GdiplusStartup function is not working.
Any thoughts?
Initialize the gdiplusStartupInput variable with the following values: GdiplusVersion = 1, DebugEventCallback = NULL, SuppressBackgroundThread = FALSE, SuppressExternalCodecs = FALSE.
See the MSDN article on the GdiplusStartup function: http://msdn.microsoft.com/en-us/library/windows/desktop/ms534077%28v=vs.85%29.aspx
In C++, the GdiplusStartupInput structure has a default constructor that initializes it with these values. Since you call the function from C, the constructor never runs and the structure remains uninitialized. Provide your own initialization code to solve the problem.
// As global
ULONG_PTR gdiplusToken;
// At the top of main
GdiplusStartupInput gdiplusStartupInput;
GdiplusStartup(&gdiplusToken, &gdiplusStartupInput, NULL);
works for me.
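If the structure has to be filled in explicitly (plain C, no constructor), a minimal sketch using the values listed above might look like this:

GdiplusStartupInput gdiplusStartupInput;
ULONG_PTR gdiplusToken;

gdiplusStartupInput.GdiplusVersion = 1;
gdiplusStartupInput.DebugEventCallback = NULL;
gdiplusStartupInput.SuppressBackgroundThread = FALSE;
gdiplusStartupInput.SuppressExternalCodecs = FALSE;

if (GdiplusStartup(&gdiplusToken, &gdiplusStartupInput, NULL) != 0) // 0 == Ok
    printf("GdiplusStartup failed\n");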

Saving BitmapSource as Tiff encoded JPEG using Libtiff.net

I'm trying to write a routine that will save a WPF BitmapSource as a JPEG encoded TIFF using LibTiff.net. Using the examples provided with LibTiff I came up with the following:
private void SaveJpegTiff(BitmapSource source, string filename)
{
    if (source.Format != PixelFormats.Rgb24)
        source = new FormatConvertedBitmap(source, PixelFormats.Rgb24, null, 0);

    using (Tiff tiff = Tiff.Open(filename, "w"))
    {
        tiff.SetField(TiffTag.IMAGEWIDTH, source.PixelWidth);
        tiff.SetField(TiffTag.IMAGELENGTH, source.PixelHeight);
        tiff.SetField(TiffTag.COMPRESSION, Compression.JPEG);
        tiff.SetField(TiffTag.PHOTOMETRIC, Photometric.RGB);
        tiff.SetField(TiffTag.ROWSPERSTRIP, source.PixelHeight);
        tiff.SetField(TiffTag.XRESOLUTION, source.DpiX);
        tiff.SetField(TiffTag.YRESOLUTION, source.DpiY);
        tiff.SetField(TiffTag.BITSPERSAMPLE, 8);
        tiff.SetField(TiffTag.SAMPLESPERPIXEL, 3);
        tiff.SetField(TiffTag.PLANARCONFIG, PlanarConfig.CONTIG);

        int stride = source.PixelWidth * ((source.Format.BitsPerPixel + 7) / 8);
        byte[] pixels = new byte[source.PixelHeight * stride];
        source.CopyPixels(pixels, stride, 0);

        for (int i = 0, offset = 0; i < source.PixelHeight; i++)
        {
            tiff.WriteScanline(pixels, offset, i, 0);
            offset += stride;
        }
    }
    MessageBox.Show("Finished");
}
This converts the image and I can see a JPEG image, but the colours are messed up. I'm guessing I'm missing a tag or two for the TIFF, or that something like the photometric interpretation is wrong, but I'm not entirely clear on what is needed.
Cheers,
It's not clear what you mean by "colours are messed up", but you probably need to convert the BGR samples of the BitmapSource to the RGB order expected by LibTiff.Net.
I mean, make sure the order of the color channels is RGB (most probably, it is not) before feeding the pixels to the WriteScanline method.

WinAPI get mouse cursor icon

I want to get the cursor icon in Windows.
I think language I use isn't very important here, so I will just write pseudo code with WinAPI functions I'm trying to use:
c = CURSORINFO.new(20, 1, 1, POINT.new(1,1));
GetCursorInfo(c); #provides correctly filled structure with hCursor
DrawIcon(GetWindowDC(GetForegroundWindow()), 1, 1, c.hCursor);
So this part works fine, it draws current cursor on active window.
But that's not what I want. I want to get an array of pixels, so I should draw it in memory.
I'm trying to do it like this:
hdc = CreateCompatibleDC(GetDC(0)); #returns non-zero int
canvas = CreateCompatibleBitmap(hdc, 256, 256); #returns non-zero int too
c = CURSORINFO.new(20, 1, 1, POINT.new(1,1));
GetCursorInfo(c);
DrawIcon(hdc, 1, 1, c.hCursor); #returns 1
GetPixel(hdc, 1, 1); #returns -1
Why doesn't GetPixel() return COLORREF? What am I missing?
I'm not very experienced with WinAPI, so I'm probably doing some stupid mistake.
You have to select the bitmap you create into the device context. If not, the GetPixel function will return CLR_INVALID (0xFFFFFFFF):
A bitmap must be selected within the device context, otherwise, CLR_INVALID is returned on all pixels.
Also, the pseudo-code you've shown is leaking objects badly. Whenever you call GetDC, you must call ReleaseDC when you're finished using it. And whenever you create a GDI object, you must destroy it when you're finished using it.
Finally, you appear to be assuming that the coordinates for the point of origin—that is, the upper left point—are (1, 1). They are actually (0, 0).
Here's the code I would write (error checking omitted for brevity):
// Get your device contexts.
HDC hdcScreen = GetDC(NULL);
HDC hdcMem = CreateCompatibleDC(hdcScreen);
// Create the bitmap to use as a canvas.
HBITMAP hbmCanvas = CreateCompatibleBitmap(hdcScreen, 256, 256);
// Select the bitmap into the device context.
HGDIOBJ hbmOld = SelectObject(hdcMem, hbmCanvas);
// Get information about the global cursor.
CURSORINFO ci;
ci.cbSize = sizeof(ci);
GetCursorInfo(&ci);
// Draw the cursor into the canvas.
DrawIcon(hdcMem, 0, 0, ci.hCursor);
// Get the color of the pixel you're interested in.
COLORREF clr = GetPixel(hdcMem, 0, 0);
// Clean up after yourself.
SelectObject(hdcMem, hbmOld);
DeleteObject(hbmCanvas);
DeleteDC(hdcMem);
ReleaseDC(NULL, hdcScreen);
But one final caveat—the DrawIcon function will probably not work as you expect. It is limited to drawing an icon or cursor at the default size. On most systems, that will be 32x32. From the documentation:
DrawIcon draws the icon or cursor using the width and height specified by the system metric values for icons; for more information, see GetSystemMetrics.
Instead, you probably want to use the DrawIconEx function. The following code will draw the cursor at the actual size of the resource:
DrawIconEx(hdcMem, 0, 0, ci.hCursor, 0, 0, 0, NULL, DI_NORMAL);
